Avamar Monitoring is handled through the Avamar Enterprise Manager, the Avamar Administrator and ConnectEMC.
The Enterprise Manager is a monitoring and reporting tool only; no configuration can be done from it. However, one Avamar system can be configured to act as the main Enterprise Manager server with other servers pointed at it, allowing multiple Avamar servers to be monitored in one place. The URL is http://<avamar_server_or_IP>/em.
From the EM you can review policy and backup job status, pull performance and capacity reports, and predict future capacity requirements. There is also an update tool that can be installed on a Windows system to query EMC's online repository, download patches, and stage them for deployment. Clients with the Avamar client software installed can have the agent pushed from the Enterprise Manager.
Avamar Administrator will display 72 hours of activity, up to 5000 lines. Its event management will display the last 5000 events from the last 24 hours, and the audit log is available to monitor user actions. There is also a status indicator for critical services located at the bottom of every view in Avamar Administrator.
ConnectEMC integrates with ESRS. You configure an SMTP server and it will send daily summaries of product info to EMC. This is an optional program. Avamar Administrator will also provide several canned reports, and SQL queries can be run against the PostgreSQL database for custom reporting capability.
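As a rough illustration of the custom reporting idea, the sketch below connects to the MCS PostgreSQL database and pulls a few rows. It is a minimal sketch built on assumptions: the host name is a placeholder, and the database name ("mcdb"), read-only account ("viewuser"), and view name ("v_activities_2") should all be verified against the MCS database documentation for your Avamar release.

# Minimal custom-reporting sketch against the Avamar MCS PostgreSQL database.
# Assumptions: the host name is a placeholder; the database name, account,
# and view name below must be verified for your Avamar release.
import psycopg2

conn = psycopg2.connect(
    host="avamar.example.com",   # utility node (placeholder)
    port=5555,                   # MCS database port (noted later in these notes)
    dbname="mcdb",               # assumption
    user="viewuser",             # assumed read-only reporting account
)
cur = conn.cursor()
cur.execute("SELECT * FROM v_activities_2 LIMIT 10;")  # hypothetical activities view
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()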
Monday, November 4, 2013
Avamar Replication
Like Data Domain, Avamar replicates deduplicated data, so only new data is sent over the wire. Replication is performed either from Avamar Administrator or with the CLI command "replicate."
Standard replication is similar to DD's directory or MTree replication in that specific data is selected and replicated to the secondary Avamar. This can be one-to-one, many-to-one, or bidirectional, because an Avamar can be both the source and the destination of replicated data. Full copy replication is similar to DD's full copy; it is also called "root-to-root" replication and makes a complete copy of the source Avamar to the destination. This can only be a one-to-one copy, and all data on the destination is overwritten.
Standard replication, when configured, creates a special REPLICATE domain on the destination server. Client backups, server state, and configuration data are all replicated, while incomplete backup jobs are not. Restores can take place directly from the destination Avamar without staging data back to the original Avamar; the destination handles it as a redirected restore from one client to another.
Full copy replication is typically only used when migrating from one Avamar to a new Avamar. When full copy is used, the client IDs are migrated as well, so the clients all appear to be registered with the new Avamar automatically. The clients just need to be redirected to the new Avamar, which can be accomplished using DNS.
Best practice dictates using a longer timeout window during initialization to accommodate the large amount of data that needs to be transferred. While there are options to "include" and "exclude" clients from replication, they should be used with extreme caution: once "include" settings are configured, no new clients will replicate without being specifically added to the list.
Replication is best configured to begin 1-2 hours after the backup window starts, so the bulk of the backup is out of the way, and to finish before the blackout window begins. Ensure there is adequate bandwidth to complete replication within 4 hours of starting; replication will consume 60-80% of available bandwidth.
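A quick back-of-the-envelope check of that 4-hour guideline can be sketched as below; the daily change figure and link speed are made-up example values, not recommendations.

# Back-of-the-envelope check: can replication finish within ~4 hours at
# 60-80% link utilization? The input values are example assumptions only.
daily_new_data_gb = 200   # deduplicated data to replicate per day (assumption)
link_mbps = 1000          # available WAN bandwidth in Mbit/s (assumption)
utilization = 0.6         # replication uses roughly 60-80% of the link; use the low end

effective_mbps = link_mbps * utilization
hours_needed = (daily_new_data_gb * 8 * 1024) / effective_mbps / 3600
print(f"Estimated replication time: {hours_needed:.1f} hours")
# If this lands well above 4 hours, the window, the link, or the amount
# of data being replicated needs to change.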
Avamar Integration
Avamar integrates with several products and technologies to provide seamless backup and restore capability and simplified management.
Avamar integrates with Data Domain to use the DD system as back-end disk. DD Boost is built into the Avamar client and is used to direct the backup to the DD instead of to the Avamar storage nodes. The Avamar initiates the backup and manages the metadata, but the client communicates directly with the Data Domain. One benefit is faster backup and recovery of large, active databases; another is expanding the capacity of the Avamar system. The DD does not need to be dedicated to the Avamar server and can be shared with other backup methods. Avamar maintenance routines trigger the corresponding routines on the DD, keeping management seamless.
Avamar also integrates with EMC NetWorker, acting as a storage back end for NetWorker. You would use this if you needed the extended application support that NetWorker provides.
Avamar integrates with VMware for both guest backup and image-level backup. For guest backup, an agent is installed inside the VM, typically for application and database support. Image-level backup is handled using VMware's VADP (vStorage APIs for Data Protection). When Avamar initiates a backup job, it creates a temporary snapshot of the VM and uses an image proxy to reduce the amount of CPU required to back up the VM; the image proxy also handles the source-based deduplication. Multiple proxies can be used to cut down on backup time. VMware backups take advantage of VMware's changed block tracking, which tracks which blocks of the VMDK file are new or updated and sends only those blocks instead of the entire file. This reduces backup time because Avamar doesn't need to scan the entire file for changes.
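The changed-block-tracking idea can be shown with a toy model; this is a conceptual sketch only, not the VADP API or Avamar's implementation.

# Toy illustration of changed block tracking (CBT): back up only the VMDK
# blocks flagged as changed since the previous backup, instead of scanning
# and sending the whole disk. Conceptual only -- not the VADP API.

def incremental_backup(disk_blocks, changed_block_ids):
    """Return only the blocks that CBT reports as new or modified."""
    return {block_id: disk_blocks[block_id] for block_id in changed_block_ids}

# Example: a 6-block virtual disk where CBT says blocks 2 and 5 changed.
disk_blocks = {i: f"<data for block {i}>" for i in range(6)}
to_send = incremental_backup(disk_blocks, changed_block_ids={2, 5})
print(f"Sending {len(to_send)} of {len(disk_blocks)} blocks")  # 2 of 6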
File-level restore is possible in Windows VMs with Avamar as of version 5.0, and Linux file-level restore is a feature available in Avamar 6.1. Multiple proxies can be used, and restores use the same proxy as the backup.
Avamar 6.1 integrates with Hyper-V, but uses no proxy; an agent must be installed on the Hyper-V host. Windows clusters are also supported with the installation of a special Avamar Windows Cluster client that is tasked with backing up the shared storage. The install requires that the Windows Avamar client be installed and the systems registered with the Avamar, after which the cluster agent can be installed.
Avamar 6.1 also supports Veritas Cluster Server (VCS) v5.0 and 5.1 on Solaris 9 or 10. It supports two-node clusters in active/active or active/passive modes only.
Avamar backs up NAS devices by way of an NDMP Accelerator, a pass-through device that stores no data. Avamar 5.0 and newer supports NDMP v4, and multiple NAS devices can be backed up to the same Avamar, or multiple Avamar servers can back up a single NAS device. An Avamar server with more than 8 GB of RAM will handle up to 8 streams, while smaller Avamar servers handle 4. There is a limit of 10,000,000 files per backup to maintain performance. Avamar determines whether a job should be the initialization (first backup) or an incremental, and if it is incremental the Avamar system merges the data into a single full backup.
Avamar Desktop/Laptop Backup
Avamar comes with a desktop and laptop option (DTLT) that essentially gives end users the ability to restore their own data from the Avamar server, along with some limited ability to determine what data gets backed up. There is a limit of 5000 clients per Avamar server, and users are normally LDAP-authenticated using Active Directory. NIS, local Avamar accounts, or a combination of these methods can also be used, though this adds complexity to the system. The DTLT option can be enabled during client installation.
A group must be created when using the DTLT option, and it will consist of a dataset that determines the data to be backed up, a schedule (often during the day), and a retention policy. If enabled by the administrator, users can be given the option to add to the dataset or modify the schedule to suit their needs, but retention cannot be overridden by users.
Dataset settings include the wildcard #USERDOCS#, which selects the user documents directory regardless of the path differences between Windows XP and Windows 7 (see the sketch below). The client can also be configured with "allow addition to source data" and "allow override group daily schedule" to accommodate the user customization mentioned above.
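The sketch below illustrates what a wildcard like #USERDOCS# abstracts away: the user documents folder lives in a different place on Windows XP than on Windows 7. The helper function is hypothetical; Avamar resolves the wildcard internally.

# Rough illustration of what the #USERDOCS# wildcard abstracts away:
# the user documents folder moved between Windows XP and Windows 7.
# This helper is hypothetical -- Avamar resolves the wildcard itself.

def resolve_userdocs(windows_version: str, username: str) -> str:
    if windows_version.lower() in ("xp", "2003"):
        return rf"C:\Documents and Settings\{username}\My Documents"
    # Vista, 7, and later
    return rf"C:\Users\{username}\Documents"

print(resolve_userdocs("xp", "jsmith"))  # C:\Documents and Settings\jsmith\My Documents
print(resolve_userdocs("7", "jsmith"))   # C:\Users\jsmith\Documents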
Both the client and server run the "dtlt" process for DTLT backup, and it can be controlled with the dpnctl utility to gather status or to stop or start the dtlt service.
Backups cannot be performed during the blackout window, so some schedule adjustment may be required. If DTLT backups need to run during windows that conflict with the server backup jobs, multiple Avamar servers can be installed with different backup, maintenance, and blackout windows.
Users browse to the file or folder they would like restored from within the DTLT client, which can be accessed through the Start menu or the tray icon.
It is recommended to bring clients on in small groups to minimize the impact on the overall environment during the initialization period.
Avamar Backup and Restore
Avamar backup and restore requires client software to be loaded on each system being backed up, unless a backup proxy is being used, although that is not the common deployment. There is an operating-system-specific base package for Windows, Mac, UNIX, etc., and there are also application plug-ins that interface software such as databases to avtar.
Avamar domains are administrative units for organizing backup clients. Users can be added and given permission to administer domains or individual clients as well as the root domain that gives access to all backup jobs. Avamar client domains do not segregate data, only administration.
Users can be local or LDAP-integrated (such as Active Directory).
Scheduled backups run according to a group specification. All new clients are placed automatically in the default group; this behavior cannot be modified, although parameters such as schedule, retention, and other settings can be changed on the default group. Group members are clients in the Avamar system. A group policy contains the dataset information (what will be backed up), the schedule for running the backup and notifications, and the retention by number of days or weeks. Indefinite retention can also be specified, but caution must be taken to avoid data bloat.
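One way to picture how the pieces of a group policy fit together is the sketch below; it is a mental model expressed as code, not an actual Avamar object or API.

# Mental model of a group policy: it ties group members (clients) to a
# dataset (what to back up), a schedule (when), and retention (how long).
# Illustrative only -- not an Avamar data structure.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GroupPolicy:
    name: str
    clients: List[str]             # group members registered with the Avamar server
    dataset: List[str]             # what gets backed up
    schedule: str                  # when the backup runs
    retention_days: Optional[int]  # None would mean indefinite retention

default_group = GroupPolicy(
    name="Default Group",
    clients=["new-client-01"],          # new clients land here automatically
    dataset=["All local filesystems"],  # example dataset entry (assumption)
    schedule="daily 20:00",             # example schedule (assumption)
    retention_days=60,                  # example retention (assumption)
)
print(default_group)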
On-demand backups are performed by browsing the client and selecting the data to back up.
Restore is initiated either from the Avamar Administrator or through the web client for desktop/laptop backup. Restoration can take place from one client to another as long as both systems are registered with the same Avamar server. File and folder versions are also maintained and can be selected at the time of restore.
The Avamar Activity Monitor is found in the Avamar Administrator, and the Avamar Client tray application will show the last 72 hours of activity up to 5000 lines.
A partial backup occurs when a backup job exceeds the allowable backup window and does not complete. The job is marked "partial" and is stored on the Avamar server, although it is not viewable in any of the interfaces. The job runs again at the next scheduled time as a full backup; because the partial exists to seed the deduplication, that backup will typically take less time. Partial backups are stored by default for 7 days.
Avamar also provides file shredding through the securedelete utility. When a backup is shredded, all data for the entire backup is deleted. It works on the local Avamar server only, so replicated data on remote servers must also be deleted separately, and any checkpoints that were created must be rolled back.
Sunday, November 3, 2013
Avamar Processes and Backup Processes
Avamar utility nodes run three main processes. The mcs process is the Management Console Server and is the main component in the Avamar system; it manages, maintains, and schedules backup jobs. The ems, or Enterprise Management Server, presents the management console, and cron is the standard Linux scheduler. Replication is scheduled using cron instead of mcs.
Storage nodes have one main process, gsan, which takes in data, writes data to disk and presents it for restore.
An Avamar server can have 27 simultaneous client connections per node, always reserving one for restore.
On the client being backed up, the avagent process listens for backup requests and communicates with the utility node and avtar. Avtar is the backup process that takes the command from the agent, runs the backup, and communicates directly with the storage node. There is also a small avscc process that runs on the client, producing a taskbar icon and giving a small set of functions to Windows clients.
There are two types of client plug-ins for Avamar: file system plug-ins that scan file systems and back them up, and database plug-ins that handle databases directly.
Avamar utility nodes listen on several TCP ports. Port 8443 is the web interface, and port 7778 is where the admin console (both GUI and CLI) connects to the utility node. Avagent communicates on ports 28001/28002. The backup process, avtar, connects to the gsan service on the storage node over port 27000 for normal backup and port 29000 for encrypted backup. Utility nodes also communicate outbound to the admin console on 7778 for alerting, and SQL queries can be run against the database (MCDB) on port 5555.
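A quick way to confirm that a client or admin workstation can actually reach those ports is a simple TCP connect test; the host name below is a placeholder.

# Quick TCP reachability check for the Avamar ports listed above.
# The host name is a placeholder; adjust for your environment.
import socket

PORTS = {
    8443: "web interface",
    7778: "administrator console (GUI/CLI) / alerting",
    28001: "avagent",
    28002: "avagent",
    27000: "avtar -> gsan (normal backup)",
    29000: "avtar -> gsan (encrypted backup)",
    5555: "MCS PostgreSQL (MCDB) queries",
}

host = "avamar.example.com"  # placeholder
for port, role in sorted(PORTS.items()):
    try:
        with socket.create_connection((host, port), timeout=3):
            print(f"{port:5d} open   - {role}")
    except OSError:
        print(f"{port:5d} closed - {role}")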
Deduplication on an Avamar system is a little different than on a Data Domain: not only is unique data stored only once, it is sent across the network only once. When the backup request is accepted from the utility node, the avtar process scans the file cache for new or updated files and skips anything not modified, saving time. Once it identifies new or changed data, it runs it through a process of "sticky-byte factoring."
Sticky-byte factoring breaks the file into objects (or chunks) of between 1 and 64 KB and always produces the same result on unchanged data. If data has changed, it quickly locates the changes and resynchronizes with the unchanged data. The average object size is 24 KB.
After factoring takes place, the data is compressed using standard compression, reducing the data size by 30-50%, so that chunks are typically 12-16 KB in size. (Note: Office 2007 files are actually compressed groups of files. Avamar 4.0 and later will uncompress Office files for deduplication, but this feature is not backward-compatible with pre-4.0 clients, which will not restore these files correctly.) The compressed chunks are then run through a hashing process to produce "atomic hashes," which are compared to the local hash cache to see if they have already been sent to the Avamar server. If not, the hash is sent to the Avamar server, which checks its own hash cache to see if it has already stored the chunk. If it has, it saves only the hash; if the data is new and unique, both the data and the hash are sent and stored on the Avamar server.
SHA-1 hashes, which are 20 bytes each, are created for all data; the hash is computed from the compressed data.
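The chunk, compress, hash, check-cache, send flow can be sketched in a few lines. This is a deliberately simplified stand-in: it uses fixed-size chunks rather than sticky-byte factoring, purely to show where the network savings come from.

# Simplified sketch of the client-side dedup flow: chunk, compress, hash,
# check a hash cache, and send only chunks the server has not seen before.
# Fixed-size chunks are used here for brevity; Avamar's sticky-byte
# factoring produces variable-size chunks.
import hashlib
import zlib

CHUNK_SIZE = 24 * 1024      # stand-in for the ~24 KB average object size
sent_hashes = set()         # stand-in for the client/server hash caches

def backup(data: bytes) -> int:
    sent_bytes = 0
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = zlib.compress(data[offset:offset + CHUNK_SIZE])
        digest = hashlib.sha1(chunk).digest()   # hash of the *compressed* chunk
        if digest not in sent_hashes:           # new, unique data
            sent_hashes.add(digest)
            sent_bytes += len(chunk)            # the chunk and its hash cross the wire
        # otherwise only the hash reference is recorded -- nothing is resent
    return sent_bytes

first = backup(b"A" * 200_000)
second = backup(b"A" * 200_000)   # identical data the next night
print(first, second)              # the second run sends no new chunk data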
Data is stored with the hash, and part of the hash value is used as an address to locate the data.
Data storage is performed in a hierarchical method of hashes and composite hashes. The data is run through variable-block deduplication to create "atomics," which are segments of data that are reduced in size using compression. The compressed objects are then hashed, and the hashes are grouped into composites. The composites are then hashed and grouped into composite-composites, which are hashed to create the root hash. The root hash identifies the backup job.
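The atomic, composite, composite-composite, root progression is essentially a hash tree; a minimal sketch under that reading:

# Minimal sketch of the hash hierarchy: atomic hashes are grouped and hashed
# into composites, composites into composite-composites, and so on until a
# single root hash identifies the backup. The group size here is arbitrary.
import hashlib

def sha1(data: bytes) -> bytes:
    return hashlib.sha1(data).digest()

def build_root_hash(atomic_hashes, group_size=4):
    level = list(atomic_hashes)
    while len(level) > 1:
        level = [
            sha1(b"".join(level[i:i + group_size]))   # composite of a group of hashes
            for i in range(0, len(level), group_size)
        ]
    return level[0]                                   # the root hash for this backup

atomics = [sha1(f"chunk-{i}".encode()) for i in range(10)]
print(build_root_hash(atomics).hex())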
Data objects are stored in atomic stripes along with parity information. Composites and composite-composites are stored in a different stripe, and root hashes are stored in an "accounts stripe." RAIN systems store RAIN parity information in another, separate stripe as well.
Saturday, November 2, 2013
Avamar System Components
The Avamar system is composed of three components: the server, the administrator, and the clients.
A server consists of a utility node, which handles the internal server processes and services and hosts the administration server, cron jobs, external authentication mechanisms, NTP, and web access to the Avamar system, plus storage nodes, which are dedicated to writing data to disk and have large disk capacities. A spare node can be added to speed recovery from a node failure and must match the capacity of the other nodes. Each node requires its own network connections.
Systematic fault tolerance is provided in Avamar systems using RAID (1 or 6) for disk failure, RAIN to protect against node failure, replication to protect against server loss, high-availability uplinks and redundant switches to protect against network loss, and checkpoints to protect against operational failure. Checkpoints are read-only snapshots of the Avamar system, implemented as hard links to all stripes in the system.
Avamar systems come as single-node systems (non-RAIN) or multi-node systems utilizing RAIN. The common notation for Avamar systems is #_of_utility_nodes x #_of_storage_nodes + spare_node, e.g. 1x3+1 for a system with 3 storage nodes and a spare. The minimum RAIN config is 1x3+1 and the maximum is 1x16+1; however, the spare node is not required if Premium Support is purchased.
Avamar 6.1 runs SLES Linux on Gen4 hardware. Gen3 and earlier run RHEL and are supported for migrations up to version 5.x. All of these support multiple CPUs.
Node sizes:
Gen4: 1.3, 2.6, 3.9, 7.8 TB
Gen3: 1.0, 2.0, 3.3 TB
AVE: 0.5, 1, 2 TB (meant for smaller installations)
Avamar Data Store is the physical appliance version; AVE is Avamar Virtual Edition, designed as a virtual appliance that runs on VMware ESXi servers.
Avamar Data Store in Gen4 comes in single-node or multi-node configurations. It is EMC-built and certified, and must be installed by EMC-trained professionals. Single-node systems are licensed at 1.3, 2.6, 3.9, or 7.8 TB capacity, and multi-node systems are licensed at 3.9 and 7.8 TB.
For multi-node systems, 1x3+1 is the minimum (unless Premium Support is purchased, then 1x3) and 18 nodes is max (1x16+1). Previous generations supported 1x2 deployments, but that is no longer a supported configuration.
Each node utilizes a single IP address with multiple network connections. Gen2, however, does not support multiple network connections. Each node requires 2 Ethernet connections to provide HA Uplink to the network core.
A single-node deployment is not expandable; to get more capacity, the appliance must be migrated to a multi-node system. In a multi-node config, a storage node can be configured as a utility node if required. Utility nodes have 8 network connections, while storage nodes provide 4.
Avamar Virtual Edition (AVE) runs in ESX environments that are not supplied by EMC, and the ESX host must meet the I/O requirements tested by EMC's benchmark utility, which is run on the ESX host for 24 hours before deployment.