Seafile is an advanced open-source collaborative cloud storage application written in Python. It provides file sharing and syncing, team collaboration features, and privacy protection through client-side encryption.
This installation procedure was tested on a CentOS 6.4 64-bit system, but it can also be used on other Linux distributions, with the caveat that init start-up scripts differ from one distribution to another. Enter the name of the MySQL user for Seafile, seafile (or whatever username you created), and the Seafile MySQL user's password. After Seafile Server installs successfully, it prints some useful information, such as which ports need to be open on your firewall to allow external connections and which scripts to use to start the server. NOTE: If you changed the standard Seafile ports during installation, update your firewall iptables rules accordingly. The first time you start the seahub.sh script, create an administrative account for Seafile Server using your email address and choose a strong password for the admin account, especially if you are deploying this configuration in a production environment. Add the following content to this init script; if Seafile is installed under a different system user, make sure to update the user and paths accordingly on the su - $USER -c lines. In an upcoming article, I will cover how to install the Seafile client on Linux and Windows systems and show you how to connect to the Seafile Server.
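As a hedged illustration, assuming the default ports (8000 for the seahub web interface and 8082 for the file server; older releases also expose 10001 and 12001), the iptables rules might look like this on CentOS:

    # Open the default Seafile ports (adjust the numbers if you changed them during setup)
    iptables -I INPUT -p tcp --dport 8000 -j ACCEPT   # seahub web interface
    iptables -I INPUT -p tcp --dport 8082 -j ACCEPT   # seafile fileserver
    service iptables save                             # persist the rules on CentOS 6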
Zabbix is a free and open-source monitoring tool used to track the availability and performance of servers, network devices and other IT assets on the network.
It can monitor everything on the network: servers, applications, database instances and network devices. Using low-level discovery rules, Zabbix can also discover VMware hypervisors (ESXi) and virtual machines.
Zabbix is open source, so there is no licensing cost, and it can be deployed in both small and large environments. The Zabbix package is not available in the default yum repository, so we will enable the Zabbix and EPEL repositories using the commands below. Then use the command below to install the RPM packages for the Zabbix server, the database server (MariaDB), the web server (httpd) and PHP.
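A hedged sketch of those commands on a CentOS 7-style system (the repository URL, release number and package names depend on the Zabbix version you choose, and the database statements anticipate the step described in the next paragraph):

    # Enable the Zabbix and EPEL repositories (URL is illustrative; pick your release)
    rpm -Uvh http://repo.zabbix.com/zabbix/3.0/rhel/7/x86_64/zabbix-release-3.0-1.el7.noarch.rpm
    yum install -y epel-release

    # Install the Zabbix server, MariaDB, Apache (httpd) and PHP
    yum install -y zabbix-server-mysql zabbix-web-mysql mariadb-server httpd php

    # Start the database, then create the Zabbix database and user
    systemctl start mariadb && systemctl enable mariadb
    mysql -u root -p -e "CREATE DATABASE zabbix_db CHARACTER SET utf8;
        GRANT ALL PRIVILEGES ON zabbix_db.* TO 'zabbix_user'@'localhost' IDENTIFIED BY 'StrongPassword';"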
Now create the Zabbix database (zabbix_db) and a database user (zabbix_user), and grant the user all privileges on the Zabbix database. If anything does not match, update it, then start the Zabbix server and try to access it from a web browser.
Geographical Diversity: With a global audience and a global demand for your content, we can place data physically closer to consumers by storing it at facilities in their country or region. Performance: Storage solutions in the cloud are designed to scale dramatically upward to support events that may see thousands or millions more consumers accessing content over a short period of time.
Cost: As clouds get larger, the per-unit costs of storage go down, primarily due to economies of scale.
Flexibility: The pay-as-you-use model removes concerns about capacity planning and wasted resources due to cyclical variations in usage.
Backup: The cloud is perceived to be a viable replacement for traditional backup solutions, boasting greater redundancy and opportunities for cost savings. In the consumer market, cloud backup services like Dropbox, Microsoft SkyDrive and Google Drive take part of your local hard drive and sync it up with the cloud. In the enterprise space, Gartner's magic quadrant for enterprise backup solutions featured several pure-play cloud backup providers, including Asigra, Acronis and i365. Content Distribution: Cloud content distribution network (CDN) services are large networks of servers distributed across datacenters over the internet. Enterprise Content Management: Companies are gradually turning to the cloud to manage organizational compliance requirements such as eDiscovery and search. Cloud Application Storage: The trend towards hosting applications in the cloud is driving innovations in how we consume and utilize storage.
Performance-Enhanced Storage: Performance-enhanced storage emulates storage running on a SAN; products like Amazon Elastic Block Store provide persistent, block-level network-attached storage that can be attached to running virtual machines, and in some cases VMs can even boot directly from these volumes.
Data Analytics Support: Innovative distributed file systems that support super-fast processing of data have been adapted to the cloud. Front End: This layer is exposed to end users and typically exposes APIs that allow access to the storage. REST APIs: REST, or Representational State Transfer, is a stateless web architecture model built upon communications between clients and servers.
File-based Protocols: Protocols such as NFS and CIFS are supported by vendors like Nirvanix, Cleversafe and Zetta. Back End: The back-end layer is where the actual physical hardware is implemented, and read and write instructions are passed to it through the hardware abstraction layer. Management Layer: This layer may support scripting and reporting capabilities to enhance automation and provisioning of storage. Backup Layer: The cloud back end can be exposed directly to API calls from snapshot and backup services. DR (Virtualization) Layer: DR service providers can attach storage to a virtual hypervisor, enabling cloud storage data to be accessed by virtual hosts that are activated in a DR scenario. This brief post provided a simple snapshot of cloud storage, its various uses, and a number of common applications for storage in the cloud.
Volume: Big Data refers to large amounts of data that is generated across a variety of applications and industries.
Variety: With a wide and disparate number of sources of Big Data, the data can be structured (like a database), semi-structured (indexed) or unstructured.
Velocity: The data is generated at high speed and needs to be processed in relatively short durations (seconds). Big Data represents an important shift in how we interpret data to look for meaningful things in the world.
What was needed was the ability to perform lightning-fast computational processes on extremely large sets of data that were also subject to frequent change.
The problem with conventional approaches to managing data was that the data primarily had to be structured. The difficulty we encounter with Big Data is that the data is subject to frequent change, whereas with a relational system we need to define a structure, or schema, ahead of time.
In the previous section, we explored the need for corporations and organizations to manage increasingly large amounts of data as well as the ineffectiveness of existing database management systems in dealing with these large data sets.
Distributed Cluster Architecture: Hadoop comprises a collection of nodes (a master plus workers).
The HDFS layer: The Hadoop Distributed File System maintains consistency of data distributed across a large number of data nodes.
The Map Reduce layer: The Processing logic of MapReduce consists of the Map function and the Reduce function. Additional Components: Hadoop is commonly implemented with a number of additional services.
Other tools: A number of other tools are available for managing Hadoop, including HCatalog, a table management service for access to Hadoop data, and the Ambari monitoring and management console. Hadoop Distributions: Hadoop was originally designed to work on the Apache platform and has very recently (circa
Now that we've covered some basics of Big Data, we are ready to explore common implementations in the government sector around the world. Search Engine Analytics: A pressing need to search vast amounts of data made publicly available by recent policy changes has become a great practical application for Hadoop and Hive. Non-exclusive policies: Bodies or groups that do not have the capability to go digital must not be penalized.
Consolidation of processes: A number of governments are moving closer towards a single consolidated online presence.
Defense: The US Department of Defense listed 9 major projects in a March 2012 White House paper on the adoption of Big Data analysis across the government. Big Data is transformative in the sense that it provides us with an opportunity to perform deep, meaningful analysis of information beyond what is normally available. Greater Transparency: Big Data has the opportunity to provide greater access to data by making data more frequently accessible to greater constituencies of people. More opportunities for enhancing performance: By providing users with access to not only greater amounts of data, but also greater varieties of data, we create more opportunities to identify patterns and trends by connecting information from more sources, leading us to capitalize on opportunities and expose threats. Better Decisions: By allowing systems to collect more data and then applying Big Data analysis techniques to draw meaningful information from these data sets, we can make better, more timely and informed decisions. Greater segmentation of stakeholders: By exposing our analytics to greater pools of raw data, we can find interesting ways to segment our constituents, identifying unique patterns at a more granular level and devising solutions and services to meet these needs. Big Brother: Governments are sensitive to the perception of using data to investigate and monitor the individual, and the storing and analysis of data by government has long provoked a strong reaction in the public eye.
Implementation Hurdles: Implementing Big Data requires a holistic effort beyond adopting a new technology.
Focus first on requirements: Decision makers are encouraged to look for the low-hanging fruit, in other words, situations that have a pressing need for Big Data solutions. Start small: Care should be taken to manage stakeholder expectations before Big Data takes on the image of a large, disruptive technology in the workplace.
Reuse infrastructure: Big Data technologies can happily coexist on conventional infrastructure. Obtain high-level support: Big Data sees the greatest benefits in terms of performance and cost savings when combining different systems. Address ethical issues first: A major obstacle to adopting Big Data is the pressure from groups of individuals who do not wish to be tracked, monitored or singled out.
Nginx is a powerful web server that can be deployed in combination with other services, such as FastCGI or Apache backends, to provide scalable and efficient web infrastructures. Cost: The key to running an effective cloud services infrastructure is to ensure that you have a tight cloud implementation that leaves little wasted excess capacity, with the help of cloud features such as auto scaling. For this post, we will implement the Nginx AMI, which can be provisioned either from the AWS Web Management Console or scripted and launched via Amazon's EC2 tools. We can also SSH into the host and verify its status from the command line. Our next task is to configure Nginx and all the necessary components we require to serve up our web content. I provisioned a security group in AWS named Webserver, enabling SSH, HTTP and HTTPS traffic from all IPs.
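As a hedged sketch (the group name and ports come from the text above; the AWS CLI calls, key name and host names are assumptions for illustration):

    # Create the security group and open SSH, HTTP and HTTPS to all IPs (tighten this in production)
    aws ec2 create-security-group --group-name Webserver --description "Nginx web server"
    for port in 22 80 443; do
        aws ec2 authorize-security-group-ingress --group-name Webserver \
            --protocol tcp --port $port --cidr 0.0.0.0/0
    done

    # After launching the Nginx AMI into this group, SSH in and confirm the service is up
    ssh -i mykey.pem ec2-user@<instance-public-dns>
    service nginx status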


Nginx should be configured with only the required components in order to minimize its memory footprint. Now we should test our web server to ensure that our configuration is working, by browsing to our public web server online.
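A quick hedged check can also be done from the command line (the hostname is a placeholder):

    # Fetch only the response headers to confirm Nginx is answering
    curl -I http://<your-public-dns-or-ip>/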
At this point, it’s wise to quickly run a snapshot before delving into the configuration files. Nginx allows administrators to perform a considerable number of tweaks to optimize performance based on our underlying system resources. CPU and Memory Utilization – Nginx is already very efficient with how it utilizes CPU and Memory. Disk Performance – To minimize IO bottlenecks on the Disk subsystem, we can tweak Nginx to minimize disk writes and ensure that Nginx does not resort to on-disk files due to memory limitations.
Open File Cache – The open file cache directive stores open file descriptors, including information about the file, its location and its size.
OS File Caching – We can define parameters around the size of the cache used by the underlying server OS to cache frequently accessed disk sectors. Time outs – Timeouts determine how long the server maintains a connection and should be configured optimally to conserve resources on the server. Data compression – We can use Gzip to compress our static data, reducing the size of the TCP packet payloads that will need to traverse the web to get to the client computer. TCP Maximum Segment Lifetime (MSL) – The MSL defines how long the server should wait for stray packets after closing a connection; this value is set to 60 seconds by default on a Linux server. Increase System Limits – Specific parameters such as the number of open file descriptors and the number of available ports to serve connections can be increased. Prior to rolling out any changes into production, it's a good idea to first test our configuration files.
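A hedged sketch of these tweaks (all values are illustrative starting points; the conf.d path assumes the stock package layout where nginx.conf includes /etc/nginx/conf.d/*.conf inside the http block, and the MSL discussion maps most closely to the kernel's FIN/TIME_WAIT settings on Linux):

    # Nginx-level tuning dropped into a conf.d snippet
    {
      echo 'open_file_cache        max=2000 inactive=20s;   # cache open file descriptors'
      echo 'open_file_cache_valid  60s;'
      echo 'gzip                   on;                       # compress static responses'
      echo 'gzip_types             text/css application/javascript;'
      echo 'keepalive_timeout      30;                       # drop idle connections sooner'
    } > /etc/nginx/conf.d/tuning.conf

    # System-level limits
    echo 'fs.file-max = 70000'                       >> /etc/sysctl.conf
    echo 'net.ipv4.tcp_fin_timeout = 30'             >> /etc/sysctl.conf   # default is 60 seconds
    echo 'net.ipv4.ip_local_port_range = 1024 65535' >> /etc/sysctl.conf
    sysctl -p
    ulimit -n 65535        # raise the open-file limit for the current shell

    # Always validate the configuration before reloading in production
    nginx -t && service nginx reload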
A number of governments have implemented roadmaps and strategies that ultimately require their ministries, departments and agencies to default to Cloud computing solutions first when evaluating IT implementations.
Making Data publicly available – The UK Government is readily exploiting opportunities to make available the terabytes of public data that can be used to develop useful applications. Government Security Certification – A 2012 Government Cloud Survey conducted by KPMG listed security as the greatest concern for governments when it comes to cloud adoption, and noted that governments are taking measures to manage those concerns.
The strategic value of cloud computing for government can be summed up in a number of key elements. Enhancing agility of government – Cited as a significant factor in cloud adoption, cloud computing promises rapid provisioning and elasticity of resources, reducing turnaround times on projects. Supporting government policies for the environment – Reduced data center spending and reduced energy consumption for cooling have tangible environmental benefits in terms of lower greenhouse gas emissions and potential reductions in allocations of carbon credits.
Enhancing Transparency of government – Cloud allows the development of initiatives that can make government records accessible to the public, opening up tremendous opportunities for innovation and advancement.
Efficient utilization of resources – By adopting a pay-for-use approach towards computing, stakeholders are encouraged to architect their applications to be more cost effective. Reduction in spending – Our research indicated that this particular element is not considered a significant aspect of moving to cloud computing according to technology decision makers; however, some of the numbers being bandied about in terms of cost savings are significant (billions of dollars) and can appeal to any constituency.
Bureaucratic hindrances – When transitioning from legacy systems, data migration and change management can slow down the “on demand” adoption of cloud computing. Cloud Gaps – Applications and services that have specific requirements which cannot be met by the cloud need to be planned for, to ensure that they do not become obsolete. Risks of confidentiality – Isolation has been a long-practiced strategy for securing disparate networks.
There is considerable research indicating that government adoption of cloud computing will accelerate in coming years. Develop Roadmaps: Before cloud computing can deliver all of the benefits it has to offer, governments must first move along a continuum towards adoption. Confidence in Security Capabilities – Cloud services must demonstrate that they can handle the required levels of security across stakeholder constituencies in order to build and establish trust.
Harmonization of Security requirements – Differing security standards will impede and obstruct large-scale interoperability and mobility in a multi-tenanted cloud environment, therefore a common overarching security standard must be developed. Management of Cloud outliers – Identify gaps where the cloud cannot provide adequate levels of service or specialization for specific technologies and applications, and identify strategies to deal with these outliers.
Development of cloud service metrics, such as common units of measurement, in order to track consumption across different units of government and allow the incorporation of common metrics into SLAs. Cloud First policies: Implementing policies that mandate that all departments across government consider cloud options first when planning new IT projects.
The adoption of cloud services holds great promise, but due to the far-reaching consequences of the widespread adoption needed to achieve objectives such as economies of scale, a comprehensive plan combined with standardization and transparency becomes an essential element of success.
The Mirror Service is an essential component when persisting your data into the enterprise database. The Mirror Service behavior is important for the stability of the application and the consistency of the data within the Database.
The Mirror Monitor utility gathers statistics about the Mirror Service behavior and exposes these via a standard JMX MBean. The Mirror Monitor uses the GigaSpaces Administration and Monitoring API to receive information about the current replication redo log size of all the IMDG primary instances.
SpaceModeListener - Identifies a failure of the primary space and switches monitoring to the new primary.
Double-clicking any of the values will display a graph that refreshes automatically.
XSCF> showlogs power
Snapshots: We can take a snapshot of an M-series server's XSCF either to a remote server or to a locally connected USB device.
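A hedged example of the snapshot command (the device name, user, host and directory are placeholders; check the XSCF reference manual for the exact options supported by your firmware):

    XSCF> snapshot -d usb0                          # dump diagnostic data to a USB device
    XSCF> snapshot -t user@192.168.1.10:/var/tmp    # or send it to a remote host over SSH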
First do a system update, then install all required Python modules using the following commands. After all Python modules are installed, create a new system user with a strong password that will be used to host the Seafile server configuration and all data in its home directory, then switch to the newly created user account. Then log in to the MySQL database and create three databases, one for each Seafile Server component: ccnet server, seafile server and seahub, with a single user for all databases. To install Seafile Server using the MySQL database, run the setup-seafile-mysql.sh initialization script and answer all questions using the following configuration options, after the script verifies the existence of all required Python modules. After the server starts successfully, open a browser and navigate to your server IP address or domain name on port 8000 using the HTTP protocol, then log in using the admin account created in the step above. After the first configuration tests, stop the Seafile server and create an init script that will help you manage the entire process more easily, just like any other Linux system daemon.
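A hedged sketch of those steps on CentOS 6 (the package list, database names and password are illustrative assumptions that follow the description above):

    # Update the system and install the Python modules Seafile typically needs with MySQL
    yum update -y
    yum install -y python-imaging python-setuptools MySQL-python

    # Create the dedicated system user and switch to it
    adduser seafile && passwd seafile
    su - seafile

    # Create one database per component, with a single MySQL user for all three
    mysql -u root -p -e "CREATE DATABASE ccnet_db;
        CREATE DATABASE seafile_db;
        CREATE DATABASE seahub_db;
        CREATE USER 'seafile'@'localhost' IDENTIFIED BY 'StrongPassword';
        GRANT ALL PRIVILEGES ON ccnet_db.*   TO 'seafile'@'localhost';
        GRANT ALL PRIVILEGES ON seafile_db.* TO 'seafile'@'localhost';
        GRANT ALL PRIVILEGES ON seahub_db.*  TO 'seafile'@'localhost';"

    # Run the installer and answer its prompts as described above
    ./setup-seafile-mysql.sh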
After the init file has been created, make sure it has execute permissions and manage the process using the start, stop and restart switches. If Seafile is installed under a different system user, make sure to update the user and paths accordingly on the su - $USER -c and $HOME lines. If you previously started Seafile on port 8000, make sure all of its processes are killed before starting the server on port 80.
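For example, assuming the init script was saved as /etc/init.d/seafile-server (the file name is an assumption):

    chmod +x /etc/init.d/seafile-server    # give the script execute permission
    chkconfig --add seafile-server         # register it with CentOS 6 runlevels
    chkconfig seafile-server on            # start it automatically at boot
    service seafile-server restart         # manage it with start | stop | restart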
Seafile can happily replace other collaborative cloud and file-syncing platforms such as public Dropbox, ownCloud, Pydio and OneDrive in your organization, being designed for better teamwork and full control over your storage, with advanced security in user space.
In this post, we will look into some of the most common uses of storage in the cloud and peel back the layers to discover exactly what makes them tick. Both companies are less than 5 years old, offer services predominantly centered around cloud storage and file sharing, and have been able to attract significant amounts of capital from investors. Your data is stored in multiple copies on multiple hard drives on multiple servers in multiple data centers in multiple locations (you get the picture). This dramatically reduces round-trip latency, a common culprit behind sluggish Internet performance. The popularity of these pay-for-use services is on the rise, with Dropbox hosting data for in excess of 100 million users within four years of launching its service. Even leading providers such as CommVault and IBM have launched cloud-based backup solutions. On the other end of the spectrum, full-blown collaboration suites such as Microsoft's Office 365 and Google Apps feature real-time document editing and annotation services. At one point or another, we've all used CDNs such as Akamai to enhance our web browsing experience. Vendors such as HP Autonomy and EMC provide services that feature secure encryption and de-duplication of data assets as well as data lifecycle management. Leading the fray are large cloud services providers such as Amazon and Microsoft, who have developed cloud storage services to meet specific application needs.
For example, the Hadoop Distributed File System (HDFS) manages and replicates large blocks of data across a network of computing nodes, to facilitate the parallel processing of Big Data. If we were to peek under the hood, we would see a basic architecture that is pretty similar to the diagram above.
A number of protocols are constantly being introduced to increase the supportability of cloud systems, including web service front ends using REST principles, file-based front ends and even iSCSI support. For example, Amazon's Elastic Block Store (EBS) service supports an incremental snapshot feature. As another example, the i365 cloud storage service automates the process of converting backups of server snapshots into a virtual DR environment in minutes. We plan to review the impact of Big Data in government and common applications of technologies to manage this issue. At the time of this article, data sets ranging from hundreds of gigabytes to terabytes and petabytes could easily qualify under the definition. The advent of social networking and e-commerce brought about a need for suppliers of increasingly undifferentiated online services to learn about the behavior of online users in order to tailor a superior user experience.
Picture a database that supports the catalog of a conventional online e-commerce website and holds hundreds of thousands of items. That's not a big problem with an online shopping cart database, since most items have the same attributes as described above.
Rolling out schema changes for a database is a potentially complex, time- and resource-intensive process, and it has a definite performance impact on the database during the change.
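To make this concrete, here is a hedged sketch of the kind of fixed schema described above (the shop database, table and column names are illustrative); note that adding a new attribute later means altering the table for every existing row:

    mysql -u root -p shop -e "CREATE TABLE catalog_item (
        sku         VARCHAR(32) PRIMARY KEY,   -- the item's SKU number
        name        VARCHAR(255),
        category    VARCHAR(64),
        price       DECIMAL(10,2),
        description TEXT
    );
    -- a later schema change is an expensive, table-wide operation:
    ALTER TABLE catalog_item ADD COLUMN floor_area INT;"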
In this section, we will briefly cover the most commonly deployed solutions in the industry for Big Data management.


Hadoop was largely developed at Yahoo and named after the toy elephant of a researcher's son. Large files are distributed across the cluster and managed via a metadata server known as the primary NameNode. The Map function applies a transformation to a list and returns an attribute-value pair (i.e. a key and its associated value).
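As a hedged illustration of the Map and Reduce roles (not from the original article), the classic word count can even be expressed with Hadoop Streaming and plain shell commands; the path to the streaming jar and the HDFS directories are assumptions:

    # Mapper splits lines into words; the framework sorts by key; the reducer counts duplicates
    hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
        -input  /data/books \
        -output /data/wordcount \
        -mapper  "tr -s ' ' '\n'" \
        -reducer "uniq -c"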
October 2012) been released by Microsoft as Microsoft HDInsight Server for Windows and the Windows Azure HDInsight Service for the cloud. Large governments led the charge for Big Data implementations, with more than 160 large Big Data programmes being pursued by the US Government alone. For example, the UK government uses Hadoop to pre-populate relevant and possible search terms when a user types into a search box. A number of large government bodies have been tasked with identifying large-volume transactions (>100,000 a year) that can be digitized. The US National Institutes of Health (NIH) is developing the Cancer Imaging Archive, an image data-sharing service that leverages imaging technology used in the assessment of therapeutic responses to treatment. Major applications involve Artificial Intelligence, Machine Learning, image and video recognition and anomaly detection. This results in an overall enhanced quality of decision making that could potentially lead to greater performance. For example, we can use Big Data to identify the elderly living in a particular part of a city who live alone and have a unique medical condition requiring specialist care, and use this information to manage staffing and service availability for these users. However, the enactment of information transparency legislation and freedom of information policies, together with the formation of public watchdog sites, have led to an encouraging environment for governments to pursue Big Data. Focusing on small pilot projects that show tangible and visible benefits is the best way to go, and it often paves the way for much larger projects down the line.
In fact, Big Data implementations can happily coexist with Relational Database Systems in existing IT environments.
But with this type of endeavor comes greater complexity and risks from differing priorities.
We will need to first access the Nginx AMI appliance page in the Amazon Marketplace to accept the terms and conditions.
Depending on how you choose to secure your deployment, you may choose to deploy a single management host with SSH access and then only enable SSH from that host into the Web server.
You will be asked to select which components to install, after which the script will install all prerequisite packages. However, we can tweak several parameters based on the type of workload that we plan to serve.
We can start with a value of 1024 and tweak our figures based on results for optimal performance.
You can also turn off Diffie-Hellman cryptography and move to a quicker cipher if you're not subject to PCI standards. If the request body is larger than the buffer, then the entire request body, or part of it, is written to a temporary file.
For the overwhelming majority of requests it is completely sufficient to have a buffer size of 1K.
If the size is greater than the given one, the client gets the "Request Entity Too Large" error (413). Error-log verbosity should not be set too low unless it is our intention to capture every single HTTP error. The reference links below provide a wealth of additional information on how to deploy Nginx under varying scenarios; please give them a read.
In this article, we evaluate the adoption of cloud computing in government and discuss some of the positive and negative implications of moving government IT onto the cloud. Vendors leading the charge include Microsoft's Office 365 for Government, with successful deployments including federal agencies like the USDA, Veterans Affairs, the FAA and the EPA, as well as the cities of Chicago, New York and Shanghai.
One example is the recent release of Met Office UK weather information to the public via Microsoft Azure's cloud hosting platform. For example, the US General Services Administration subjects each successful cloud vendor to a battery of tests that include an assessment of access controls. The transfer of CAPEX to OPEX also smooths out cash-flow concerns in an environment of tight budgets.
We need to develop cloud-aligned approaches towards IT provisioning, operations and management. For that very purpose, a number of governments have developed roadmaps to aid in charting a course of progression towards the cloud.
The Mirror Service delegates the IMDG activities to the database in a reliable, asynchronous manner, allowing the application to access the data stored in-memory without having the database as part of the critical path of the transaction. With a large-scale application, you might want to monitor the Mirror Service behavior in real time.
You may access this collected data using standard JMX viewers such as the JConsole utility that comes with the JVM.
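For instance, a hedged way to attach JConsole to the JVM that exposes the Mirror Monitor MBean (the host name and JMX port are placeholders and must match your own JMX configuration):

    jconsole service:jmx:rmi:///jndi/rmi://mirror-host:5001/jmxrmi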
When the IMDG has multiple partitions, the exposed redo log value is the sum of the current replication redo log sizes of all the IMDG primary instances. Without the graph displayed, you will have to click the Refresh button manually to see the most up-to-date value of each statistic.
The XSCFU diagnoses and starts the entire server, configures domains, offers dynamic reconfiguration, and detects and reports various failures.
Go to the official Seafile download page and grab the latest .tar Linux archive for your server architecture using the wget command, then extract it to the home directory of the Seafile user created earlier and enter the extracted Seafile directory. If you want to access the Seafile Server from a browser on the standard HTTP port, use the following init script, which starts the server on port 80 (be aware that starting a service on ports below 1024 requires root privileges).
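A hedged example of the download step (the URL and version are placeholders; copy the current link from the download page):

    su - seafile
    wget https://download.seafile.com/seafile-server_<version>_x86-64.tar.gz   # placeholder URL
    tar -xzf seafile-server_<version>_x86-64.tar.gz
    cd seafile-server-<version>/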
In fact, Dropbox raised $250 million at a $4 billion valuation from investors, with Box Inc raising another $125 million in mid-2012.
Cloud providers such as the Microsoft Windows Azure Content Distribution Network (CDN) and the Amazon CDN offer affordable CDN services for serving static files and images, and even streaming media, to a global audience. The cloud is uniquely positioned to serve this process, with the ability to provision thousands of nodes, perform compute processes on each node and then tear down the nodes rapidly, thus saving huge amounts of resources.
All storage architectures comprise a number of layers that work together to provide users with a seamless storage service. So, for example, a user can use an app running on their desktop to perform basic functions such as creating folders, uploading and modifying files, as well as defining permissions and sharing data with other users. First of all, let's look at some basic definitions and define the scope of this article.
The database is structured and relational, meaning that each item put up for sale on the site can be stored as an object and described by a number of attributes, including the name of the item, the item's SKU number, category, price, description, etc. But what if we don't know the types of attributes of the data we're planning to store? Conventional solutions such as adding more computing resources or splitting up the database into shards are feasible, but do not fundamentally change how the data is being managed. Users who are not familiar with accessing digital information should also be given alternative mechanisms such as contact centers. Often, extending the pipeline for Big Data projects allows technology stakeholders time to get over the learning curve of adoption. Managing this challenge requires the appointment of senior stakeholders who can align priorities and provide the necessary visibility for forward movement. You can choose to deploy the Linux variant of your choice and then deploy Nginx on your OS, or you can choose to deploy the Nginx AMI Appliance developed by Nginx Inc., available in the Amazon Marketplace for an additional licensing fee. Make sure that you thoroughly test these settings before deploying into a production environment.
As we are primarily serving static files, we expect our workload profile to be less CPU-intensive and more disk-oriented. The ulimit -n command gives us the numerical figure that we can use to define the number of worker_connections.
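As a hedged sketch (assuming the stock /etc/nginx/nginx.conf layout; the values are starting points to tune from):

    ulimit -n                  # note the reported limit, e.g. 1024
    # then set the events block in /etc/nginx/nginx.conf accordingly, for example:
    #   worker_processes  auto;
    #   events { worker_connections  1024; }
    nginx -t && service nginx reload   # validate and apply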
Other vendor solutions include Google Apps for Government, which supports the US Department of the Interior. A common cloud infrastructure runs the risk of pervasive exploitation, since all applications and tenants are connected via a common underlying infrastructure. When using the Mirror Service, database performance and availability do not impact the application's response time.
This means reacting in a timely fashion in case there is a need to intervene with its activity or with related components that interact with the Mirror Service (the IMDG, the database).
It looks like Silicon Valley sees Cloud Storage services as a key piece in the future of cloud.
For each item that we load into the database, we can perform searches according to product categories and descriptions and even sort the products by price.
Let's imagine that we have a service that crawls the web for real estate websites in a particular region. HDFS maintains a number of high-availability features, including replication and rebalancing of data across nodes. Consolidating information without incurring performance penalties requires standardization on common platforms and technologies. Common services should be used in order to exploit economies of scale; applications and their underlying systems need to be tweaked and optimized. This is great and also efficient, because almost every object in the database will have the same types of attributes.
The objective is to build up an aggregated repository of information about properties for sale or rent that users can query. A major advantage of HDFS is location awareness, where nodes are scheduled to run computational processes for data that is situated close to the nodes, thereby reducing network traffic.
For example, we could have HTML files, media files (JPEGs and MPEGs) or even strings of characters.


