SSD Is Finally Ready For Server Use

Solid-state drives (SSDs) have reached the point of becoming the dominant form of server storage. Servers that use SSD storage run faster than servers with traditional hard drives, which means fewer servers are needed for the same workload, and that translates into substantial savings. SSDs can also be fitted to existing servers, extending their useful value.

Boosting performance and saving money are not the only wins for SSDs. Video editing and scanning, for instance, used to demand a large hard drive setup; with SSDs the workload completes quickly without a large array of drives. Databases benefit as well: searches that generate heavy random I/O return results much faster on flash, which makes SSDs a natural choice for database workloads.

SSDs are also helping the cloud computing industry. From the beginning, latency was an issue for all cloud providers. On-demand applications that require a lot of memory and GPU power can perform tasks much faster with the help of SSDs. Today, HDDs are mainly useful for bulk storage and archiving.

One of the myths circulating about SSDs is that they wear out much more quickly than HDDs. A typical flash drive today wears out only after about five years of exceptionally heavy use, a far cry from the light-duty ratings of early MLC flash. Most flash products will outlive the rest of the system with plenty of margin to spare.
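
To put the endurance claim in concrete terms, flash endurance is commonly rated in drive writes per day (DWPD). Using hypothetical but representative figures, a 1 TB SSD rated at 1 DWPD over a five-year warranty can absorb about 1 TB × 365 × 5 ≈ 1,825 TB of writes before reaching its rated limit, far more than most servers will ever issue.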

In addition to providing excellent I/O, SSDs push past the limits of most RAID cards. RAID controllers cannot deliver the throughput SSDs are capable of, especially with RAID 5. When deploying SSDs, it is therefore easier to use RAID 10 or a RAID 1 mirror for data protection, which can be handled in host software.
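
As a minimal sketch of the host-software approach, a RAID 10 set across four SSDs could be assembled on Linux with mdadm (the device names are illustrative; substitute your own):

    # Build a RAID 10 array from four SSDs with Linux md software RAID
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
          /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # Check that the array is assembling and healthy
    cat /proc/mdstat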

It is also worth factoring in the power savings that come with SSD usage. A typical SSD draws about 10 W less than an HDD on average, with additional savings when fewer servers are required to carry the same I/O-intensive workload.
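
At that rate, a single drive saves roughly 10 W × 8,760 hours ≈ 88 kWh per year of continuous operation, before counting the power of any servers eliminated outright.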

The world of hard disk drives is split in two: SAS-based enterprise drives, and SATA-based consumer/bulk-storage drives. In the SSD space, the concept of the enterprise drive is not as well defined. Server SSDs are instead dividing into two camps: NVMe drives on PCIe, and SATA drives. As these technologies become mainstream, SSD pricing will continue to fall.

Another important change that will shape the SSD sector is the arrival of SSDs with capacities beyond those of the largest spinning drives. HDD vendors are running into the limits of physics. Meanwhile, 16 TB SSDs have already been announced, and SSD capacities can be expected to grow to as much as 30 TB soon.

Latest Docker Release Offers New Features for IT Managers

Docker continues to push its platform for building, shipping, and running applications in Linux containers into more enterprise data centers, and it is now focusing more on security and high availability.

The most recent release, Docker 1.10, which came out earlier this month, adds a number of features that matter to IT managers who deploy containerized applications in their data centers.

  • Automatic Rescheduling on Server Failure

Swarm, Docker’s software for turning clusters of servers into hosts for containerized applications, can now automatically reschedule containers when a node in the cluster fails. Because Swarm knows which containers run on which node, it can move those containers to a healthy node if any node goes down.
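
At the time of this release, rescheduling was an experimental Swarm feature requested per container; a minimal sketch of the intended usage, with an illustrative image, looks like this:

    # Ask Swarm to restart this container on another node if its
    # current node fails (experimental policy, set via an env var)
    docker run -d -e "reschedule:on-node-failure" redis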

  • Advanced Clustering Features

In the past, if a node failed to join a cluster, the cluster would start up without waiting for that node to connect. In the new release, the node keeps retrying until a stable connection is made, and the system is designed to retry a specific number of times before giving up on the connection.

  • Separate Privileges for Host and Container

Many users raised the security issue that access privileges inside a container could map directly to access privileges outside it. The new release therefore splits access between the inside and the outside of the container. This ensures that a user inside the container cannot exercise host-level permissions, limiting the damage a root user inside a container can inflict on the host.
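
Concretely, this is Docker 1.10’s user namespace support, which is switched on at the daemon level; a minimal sketch of enabling it:

    # Start the Docker 1.10 daemon with user namespace remapping;
    # "default" creates and uses the dockremap user and group
    docker daemon --userns-remap=default

    # id inside the container reports uid 0 (root), but the process
    # runs on the host under an unprivileged remapped uid
    docker run --rm busybox id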

When developers build containerized applications, they generally do not know what network stack their applications will run on in the data center; the network is an abstraction. They would like to reference a particular network by name, and a new feature enables a simple mapping between the network abstraction the developer defines and the actual network implementation IT managers set up in the data center.
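
A minimal sketch of that mapping with the docker network commands (network and image names are illustrative):

    # The developer references a network purely by name...
    docker network create app-tier

    # ...and operations can back that name with whatever driver the
    # data center uses (bridge, overlay, or a plugin) without touching
    # the application. On a user-defined network, containers resolve
    # each other by name, so "web" can reach "db" as db.
    docker run -d --name db --net=app-tier redis
    docker run -d --name web --net=app-tier my-web-app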