
Wednesday, November 16, 2016

Cloning a Ceph client auth key

I can't think of a reason to do this other than wanting the same user and auth key to authenticate against different Ceph clusters, as in a multi-backend solution, or because things got messy while running a non-default configuration.

Things get easier when services connect to both clusters with the same user and auth key, so let's review some background commands for managing users, keys, and permissions.

Create a new user and auth key (the same procedure you would use for a cinder client; here, client.jerry):

root@ceph-admin:~# ceph auth get-or-create client.jerry
client.jerry
key: AQAZT05WoQuzJxAAX5BKxCbPf93CwihuHo27VQ==

As you can see, the key is not passed in as a parameter; running the same command on a different cluster will produce a completely different key.
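For example, running the same create command on a second cluster returns a brand-new key (illustrative output only — the host name ceph-admin2 and the key below are made up):

root@ceph-admin2:~# ceph auth get-or-create client.jerry
client.jerry
key: AQxxxxxxxxxxxxxxxxxxMADEUPKEYxxxxxxxxxx==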
Just to check, print the complete list of keys:

root@ceph-admin:~# ceph auth list
installed auth entries:

osd.0
key: AQCvCbtToC6MDhAATtuT70Sl+DymPCfDSsyV4w==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQC4CbtTCFJBChAAVq5spj0ff4eHZICxIOVZeA==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQBHCbtT6APDHhAA5W00cBchwkQjh3dkKsyPjw==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
client.jerry
key: AQAZT05WoQuzJxAAX5BKxCbPf93CwihuHo27VQ==

To print a single user's authentication key to standard output, execute a command in the following format:

ceph auth print-key {TYPE}.{ID}

root@ceph-admin:~# ceph auth print-key client.jerry
AQAZT05WoQuzJxAAX5BKxCbPf93CwihuHo27VQ==

To make the key on this cluster match the other one, we need to update the key and/or the capabilities. That is what the import command is for: it updates keys and capabilities for existing users and creates any users that don't exist yet. Use the following format:

ceph auth import -i /path/to/keyring

The keyring file needs to be in the following format; if it isn't, the command will not do its work — it will just hang.

root@ceph-admin:~# cat jerry.key
[client.jerry]
key = AQAMP01WS8i8ERAAPspjwMzUm4SL00n+WppM6A==
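
You don't have to write this file by hand. A minimal sketch: on the cluster that already holds the key you want to clone, ceph can dump the entry in exactly this keyring format; then copy jerry.key over to the cluster you want to update.

root@ceph-admin:~# ceph auth get client.jerry -o ./jerry.key
exported keyring for client.jerry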

Now we can update the auth key for the user jerry:

root@ceph-admin:~# ceph auth import -i ./jerry.key
imported keyring

List the entries again to verify:

root@ceph-admin:~# ceph auth list
installed auth entries:

osd.0
key: AQCvCbtToC6MDhAATtuT70Sl+DymPCfDSsyV4w==
caps: [mon] allow profile osd
caps: [osd] allow *
osd.1
key: AQC4CbtTCFJBChAAVq5spj0ff4eHZICxIOVZeA==
caps: [mon] allow profile osd
caps: [osd] allow *
client.admin
key: AQBHCbtT6APDHhAA5W00cBchwkQjh3dkKsyPjw==
caps: [mds] allow
caps: [mon] allow *
caps: [osd] allow *
client.jerry
key: AQAMP01WS8i8ERAAPspjwMzUm4SL00n+WppM6A==

Done. I'll keep posting these little helper tricks until the final post about the multi-backend Ceph setup is out.

Friday, March 18, 2016

Export instance from OpenStack with Ceph/rbd backend

Suppose you want to migrate an instance between different infrastructures, or you need to hand an instance over to a client; either way, you need to recover (export) the instance's volumes.


Step 1: Get the instance UUID.

root@ceph-admin:~# nova list | grep InstanceToExport
| 2bdda36c-f0dd-4fa5-bb8b-3df346b17002 | InstanceToExport | SHUTOFF | - | Shutdown | vlan8=192.168.255.53; vlan1837=10.20.37.7; vlan1829=10.20.23.53 |


The instance UUID is returned here: 2bdda36c-f0dd-4fa5-bb8b-3df346b17002


Step 2: Get the volume UUID from the instance, using the instance UUID returned in step 1.

root@ceph-admin:~# cinder list | grep 2bdda36c-f0dd-4fa5-bb8b-3df346b17002
| fdb279c5-24bb-45d7-a86a-a33f4c285b5a | in-use | None | 100 | None | true | 2bdda36c-f0dd-4fa5-bb8b-3df346b17002 |


The volume UUID is returned here: fdb279c5-24bb-45d7-a86a-a33f4c285b5a


Step 3: Search for the volume in the Ceph pool; in my case, volumes are stored in the cinder-volumes pool.

root@ceph-admin:~# rbd --pool cinder-volumes ls | grep fdb279c5-24bb-45d7-a86a-a33f4c285b5a
volume-fdb279c5-24bb-45d7-a86a-a33f4c285b5a


Now you have the volume name in the pool: volume-fdb279c5-24bb-45d7-a86a-a33f4c285b5a


Step 4: Export the volume.

root@ceph-admin:~# rbd export cinder-volumes/volume-fdb279c5-24bb-45d7-a86a-a33f4c285b5a ./InstanceToExport.img
Exporting image: 100% complete...done.
root@ceph-admin:~# ll -ltrh *.img
-rw-r--r-- 1 root root 100G Feb 17 17:09 InstanceToExport.img


Step 5: Compress, so you can scp or rsync it faster; this step is optional but highly recommended.


root@ceph-admin:~# gzip -9 InstanceToExport.img
root@ceph-admin:~# ll *.gz
-rw-r--r-- 1 root root 1.2G Feb 17 18:02 InstanceToExport.img.gz


Step 6: Checksum, to be sure the copy doesn't get corrupted in transit.

root@ceph-admin:~# md5sum InstanceToExport.img >InstanceToExport.img.md5
root@ceph-admin:~# md5sum InstanceToExport.img.gz >InstanceToExport.img.gz.md5
root@ceph-admin:~# cat InstanceToExport.img.md5
5504cdf2261556135811fdd5787b33a5 InstanceToExport.img
root@ceph-admin:~# cat InstanceToExport.img.gz.md5
8a76c28d404f44cc43872e69c9965cd2 InstanceToExport.img.gz


Note: md5sum on InstanceToExport.img is going to take a while — about 20 minutes on my 100G volume — so omit it if you want.
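
For completeness, the destination side is the mirror image. A minimal sketch, assuming the target cluster also keeps volumes in a cinder-volumes pool (verify the checksum first):

root@ceph-admin:~# md5sum -c InstanceToExport.img.gz.md5
InstanceToExport.img.gz: OK
root@ceph-admin:~# gunzip InstanceToExport.img.gz
root@ceph-admin:~# rbd import ./InstanceToExport.img cinder-volumes/volume-fdb279c5-24bb-45d7-a86a-a33f4c285b5a
Importing image: 100% complete...done.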

Saturday, March 12, 2016

Testing juju environment inside LXC container

I think we can skip the part about what juju is and how it works, so I'll post the commands and configuration to get the environment working inside an LXC container created just for juju — not the local provider configuration that creates LXC containers itself. In other words, our host server does not have any juju package installed.

Some links to read in case you need more info, or you can post a question.

Host environment:

root@spyder:~# cat /etc/issue
Ubuntu 15.10
root@spyder:~# uname -a
Linux spyder 4.2.0-18-generic #22-Ubuntu SMP Fri Nov 6 18:25:50 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
root@spyder:~# dpkg-query -W | grep lxc
liblxc1         1.1.5-0ubuntu5~ubuntu15.10.1~ppa1
lxc             1.1.5-0ubuntu5~ubuntu15.10.1~ppa1
lxc-templates   2.0.0~beta2-0ubuntu2~ubuntu15.10.1~ppa1
lxcfs           2.0.0~rc3-0ubuntu1~ubuntu15.10.1~ppa1
lxctl           0.3.1+debian-3
python3-lxc     1.1.5-0ubuntu5~ubuntu15.10.1~ppa1


On my host server I have two lxcbr interfaces, but for the juju container I'm going to use lxcbr0. The container will have full internet access, but to reach apps inside it we will need DNAT iptables rules (I'll post the iptables configuration at the end).

root@spyder:~# ifconfig lxcbr0 | grep inet
inet addr:10.0.2.1 Bcast:0.0.0.0 Mask:255.255.255.0
inet6 addr: fe80::58b8:36ff:fe6e:4e57/64 Scope:Link

Original lxc-ls output

root@spyder:~# lxc-ls --fancy
NAME        STATE    IPV4                   IPV6  GROUPS  AUTOSTART
-------------------------------------------------------------------
ceph-admin  RUNNING  10.0.2.11, 10.0.3.84   -     -       YES
ceph01      RUNNING  10.0.2.85, 10.0.3.85   -     -       YES
ceph02      RUNNING  10.0.2.103, 10.0.3.86  -     -       YES
ceph03      RUNNING  10.0.2.156, 10.0.3.87  -     -       YES


Now to the fun part, getting things working :)

First, create the juju container.

root@spyder:~# lxc-create -t download -n juju -- --dist ubuntu --release trusty --arch amd64
Using image from local cache
Unpacking the rootfs

---
You just created an Ubuntu container (release=trusty, arch=amd64, variant=default)

To enable sshd, run: apt-get install openssh-server

For security reason, container images ship without user accounts
and without a root password.

Use lxc-attach or chroot directly into the rootfs to set a root password
or create user accounts.


Start the container


root@spyder:~# lxc-start -n juju -d --logfile juju.log
root@spyder:~# lxc-ls --fancy
NAME        STATE    IPV4                   IPV6  GROUPS  AUTOSTART
-------------------------------------------------------------------
ceph-admin  RUNNING  10.0.2.11, 10.0.3.84   -     -       YES
ceph01      RUNNING  10.0.2.85, 10.0.3.85   -     -       YES
ceph02      RUNNING  10.0.2.103, 10.0.3.86  -     -       YES
ceph03      RUNNING  10.0.2.156, 10.0.3.87  -     -       YES
juju        RUNNING  10.0.2.110             -     -       YES


Now let's attach to the container (at this point you should install openssh-server, set user passwords, etc.).


root@spyder:~# lxc-attach --name juju
root@juju:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
22: eth0@if23: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:16:3e:53:f1:f5 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.110/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::216:3eff:fe53:f1f5/64 scope link
       valid_lft forever preferred_lft forever


To install Juju, you simply need to grab the latest juju-core package from the PPA:


root@juju:~# apt-get install python-software-properties
root@juju:~# apt-get install software-properties-common
root@juju:~# add-apt-repository ppa:juju/stable
 Stable release of Juju for Ubuntu 12.04 and above. More info: https://launchpad.net/~juju/+archive/ubuntu/stable
Press [ENTER] to continue or ctrl-c to cancel adding it
gpg: keyring `/tmp/tmpyqs7twek/secring.gpg' created
gpg: keyring `/tmp/tmpyqs7twek/pubring.gpg' created
gpg: requesting key C8068B11 from hkp server keyserver.ubuntu.com
gpg: /tmp/tmpyqs7twek/trustdb.gpg: trustdb created
gpg: key C8068B11: public key "Launchpad Ensemble PPA" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
OK
root@juju:~# apt-get update
root@juju:~# apt-get upgrade
root@juju:~# apt-get install juju-quickstart juju-core


Juju needs to be configured to use your cloud provider. This is done via the following file:


$HOME/.juju/environments.yaml


Juju can automatically generate the file in this way:


ubuntu@juju:~$ juju generate-config


There are different types of cloud providers — check environments.yaml for more info. The one that matters for us is the manual provider, because we are going to deploy manually on the same machine (the LXC container in this case), so I deleted all the other entries:

ubuntu@juju:~$ cat /home/ubuntu/.juju/environments.yaml
default: manual
environments:
    manual:
        type: manual
        # bootstrap-host holds the host name of the machine where the
        # bootstrap machine agent will be started.
        bootstrap-host: juju
        # bootstrap-user specifies the user to authenticate as when
        # connecting to the bootstrap machine. It defaults to
        # the current user.
        bootstrap-user: ubuntu
        # storage-listen-ip specifies the IP address that the
        # bootstrap machine's Juju storage server will listen
        # on. By default, storage will be served on all
        # network interfaces.
        # storage-listen-ip:
        # storage-port specifes the TCP port that the
        # bootstrap machine's Juju storage server will listen
        # on. It defaults to 8040
        # storage-port: 8040
        # Whether or not to refresh the list of available updates for an
        # OS. The default option of true is recommended for use in
        # production systems.
        #
        # enable-os-refresh-update: true
        # Whether or not to perform OS upgrades when machines are
        # provisioned. The default option of false is set so that Juju
        # does not subsume any other way the system might be
        # maintained.
        #
        # enable-os-upgrade: false


The first step is to create a bootstrap environment. This is a cloud instance that Juju will use to deploy and manage services. It will be created according to the configuration you have provided, and your public SSH key will be uploaded automatically so that Juju can communicate securely with the bootstrap instance.


ubuntu@juju:~$ juju switch manual
manual -> manual
ubuntu@juju:~$ juju bootstrap
WARNING ignoring environments.yaml: using bootstrap config in file "/home/ubuntu/.juju/environments/manual.jenv"
Bootstrapping environment "manual"
Starting new instance for initial state server
Installing Juju agent on bootstrap instance
Logging to /var/log/cloud-init-output.log on remote host
Running apt-get update
Installing package: curl
Installing package: cpu-checker
Installing package: bridge-utils
Installing package: rsyslog-gnutls
Installing package: cloud-utils
Installing package: cloud-image-utils
Installing package: tmux
Fetching tools: curl -sSfw 'tools from %{url_effective} downloaded: HTTP %{http_code}; time %{time_total}s; size %{size_download} bytes; speed %{speed_download} bytes/s ' --retry 10 -o $bin/tools.tar.gz
Bootstrapping Juju machine agent
Starting Juju machine agent (jujud-machine-0)
Bootstrap agent installed
manual -> manual
Waiting for API to become available
Waiting for API to become available
Bootstrap complete


You can see that the bridge-utils package gets installed, but inside an LXC container you are not going to use it — traffic reaches the juju container through the bridge outside, on the host — so you can remove it:


root@juju:~# apt-get purge bridge-utils


If you have any problem with the bootstrap, delete the conf files and start over — and I mean problems like the nasty "ERROR machine is already provisioned" when the machine is not really provisioned.


root@juju:~# apt-get purge lxc*
root@juju:~# apt-get purge juju*
root@juju:~# rm -rf /etc/init/juju*
root@juju:~# rm -rf /var/lib/juju


If not, just continue. If everything went right, you will see output similar to this one, which means the juju service is running on machine 0 (the same LXC container).


ubuntu@juju:~$ juju status
environment: manual
machines:
  "0":
    agent-state: started
    agent-version: 1.25.3
    dns-name: juju
    instance-id: 'manual:'
    series: trusty
    hardware: arch=amd64 cpu-cores=2 mem=3000M
    state-server-member-status: has-vote
services: {}


Assuming it returns successfully, we can now deploy some services and explore the basic operations of Juju. Next, simply deploy our first charm (juju-gui) and expose it; this charm makes it easy to add a Juju GUI to an existing environment.


ubuntu@juju:~$ juju deploy juju-gui --to 0
ubuntu@juju:~$ juju expose juju-gui
........
........ after a couple of minutes — juju needs to download several packages and configure everything, so better use "watch juju status" — until you see output similar to this.
........
ubuntu@juju:~$ juju status
environment: manual
machines:
  "0":
    agent-state: started
    agent-version: 1.25.3
    dns-name: juju
    instance-id: 'manual:'
    series: trusty
    hardware: arch=amd64 cpu-cores=2 mem=3000M
    state-server-member-status: has-vote
services:
  juju-gui:
    charm: cs:trusty/juju-gui-51
    exposed: true
    service-status:
      current: unknown
      since: 12 Mar 2016 09:12:45Z
    units:
      juju-gui/0:
        workload-status:
          current: unknown
          since: 12 Mar 2016 09:12:45Z
        agent-status:
          current: idle
          since: 12 Mar 2016 09:17:48Z
          version: 1.25.3
        agent-state: started
        agent-version: 1.25.3
        machine: "0"
        open-ports:
        - 80/tcp
        - 443/tcp
        public-address: juju


Now the juju-gui is installed, configured, and exposed over ports 80 and 443, but remember, this is inside the LXC container, so we can't access the GUI unless we NAT some ports from our host server.


root@spyder:~# iptables -t nat -A PREROUTING -p tcp -d 10.0.1.139 --dport 443 -j DNAT --to-destination 10.0.2.110:443
root@spyder:~# iptables -t nat -A PREROUTING -p tcp -d 10.0.1.139 --dport 80 -j DNAT --to-destination 10.0.2.110:80


And boom!!!! Now we can access juju-gui; the login info is in this file:


ubuntu@juju:~$ cat .juju/environments/manual.jenv
user: admin
password: 0d4e465c15d5880d0c348a921489a9f1
.......
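
To check the plumbing from another machine, a quick sketch (assuming 10.0.1.139 is the host's address, as in the iptables rules above; -k skips certificate validation because the GUI will typically present a self-signed certificate):

workstation$ curl -kI https://10.0.1.139/

Getting an HTTP response here means the DNAT rules are forwarding traffic into the container correctly.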

Thursday, March 10, 2016

Cinder Volume Transfer

Let's assume you want to change ownership of volume from Tenant_A to Tenant_B.

Step 1: Tenant A will initiate an Ownership Transfer which will enable another tenant to take ownership of it.

$ source openrc Tenant_A Tenant_A
$ cinder transfer-create [volume_id]

An Authentication Key and a Transfer ID are returned here.
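
If you lose track of the Transfer ID before it is accepted, you can list pending transfers again (note: the auth key itself is only displayed once, at creation time, so save it):

$ cinder transfer-list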

Step 2: Tenant B needs to accept the Transfer using the Transfer ID and The Authentication Key generated above.

$ source openrc Tenant_B Tenant_B
$ cinder transfer-accept [transfer_id] [auth_key]

You should now see the volume associated with Tenant_B.
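
A quick way to confirm, sketched with the same placeholder (run as Tenant_B; the volume ID is the one from step 1):

$ cinder list | grep [volume_id]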

Thursday, March 12, 2015

The real problem behind highly transactional applications

An architecture trying to handle at least 10,000 concurrent connections is facing the C10K problem. Even though this is so last decade, it is still breaking servers, architectures, and configurations, and giving sysadmins real headaches — and not always because of legitimate connections: basic DDoS attacks work on pretty much the same concept, lots and lots of new connections to the same service.

Today, because of the need to connect and share resources across infrastructures, and to implement high availability, many companies have adopted SOA or multi-layer solutions. These solutions can be handy, but they become a problem when implemented the wrong way — without a proper testing set — and sometimes people don't even know whether the implemented architecture will respond correctly, or even in the way the development team planned. This problem affects not only misconfigured architectures but also solutions that were never planned to grow.

The usual culprit is errors in coding and validation at every layer of the solution: proprietary code, web server, application server, DBMS, and so on. If applications were coded properly, the security and bug-hunting guys would be unemployed by now.

So what are you going to see in a highly transactional server with a misconfiguration problem? (A quick check for the first two items follows the list.)

  • Lots of TIME_WAIT connections.
  • Lots of CLOSE_WAIT connections.
  • Possible memory problems.
  • Possibly the system swapping.
  • A really slow server.
  • Many timeouts in the application log.
  • The application becoming unreachable.
  • No way to open new connections to the server, not even ssh ones.
  • ... Worst case scenario, dead servers.
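
A quick way to count connection states and confirm the first two symptoms — a sketch using ss from iproute2 (the counts shown are illustrative):

# count sockets per TCP state; piles of TIME-WAIT or CLOSE-WAIT confirm the symptom
root@server:~# ss -tan | awk 'NR>1 {print $1}' | sort | uniq -c | sort -rn
   9536 TIME-WAIT
   2112 CLOSE-WAIT
    431 ESTAB
      4 LISTEN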

But service restarts, reboots, and kills will not solve all the problems, and neither the operating system nor the kernel is there to solve them all: the kernel's job is to handle the control plane, in a general, multipurpose way. If you take only the kernel-tuning approach, the kernel is going to be part of the problem, and you are going to be far, far away from solving it.

The kernel has a known way of working and a known complexity — O(n^2): with every new connection, the kernel has to walk all the current processes to figure out which thread should handle the packet; and with connection polling the story is the same, each packet has to walk a list of sockets.





High-level kernel diagram: layers and intercommunication (1).



Even if you take the complete tuning approach, maybe the application will work — but not always; you only get stability, not a real solution. The correct way to handle and solve the C10K problem, and even more so C10M, is to let the kernel handle the control plane while applications handle the data plane, and/or to write software that bypasses the stack, such as with DPDK (2). This is pretty much like working with an exokernel (3), following an end-to-end principle.





Common kernel vs. exokernel (3).



To build usable, scalable applications that support 10 million concurrent connections (and more), we first need to solve other kinds of problems:

  • Packet scalability.
  • Multi-core scalability.
  • Memory scalability.

So the real problem is.... knowledge. Lots of developers know how to code client/server applications, but fewer than 50% of them know how TCP/IP (or T/TCP) works, or how to use multi-processing libraries. I understand this is not an easy task to accomplish, but we really need to start working on it: with every performance problem we also need to look into the code and the software architecture, searching for scalability errors. We will not always have site reliability engineers to make our application super reliable and super fast all the time — and even when we do have these guys, the real fix may live many iterations back, before the system starts losing points of our precious 99.99…99.

And what if we can't correct the coding errors fast enough, or at all (in the case of proprietary software)? Then tuning will always be the answer — but, like I said, tune all the layers, not only the kernel (a kernel-side sketch follows the list):

  • Tune for aggressive network throughput.
  • Tune timeouts.
  • Tune the socket parameters.
  • Tune shared filesystems.
  • Tune the schedulers.
  • Tune the complete architecture.
  • ….
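
For example, a few of the usual kernel-side knobs (a sketch only — the values are illustrative, must be tested against your own workload, and again cover just one layer of many):

# /etc/sysctl.conf — illustrative values, not a recipe
net.core.somaxconn = 4096                    # longer accept queue for connection bursts
net.core.netdev_max_backlog = 5000           # queue more packets from the NIC before dropping
net.ipv4.ip_local_port_range = 1024 65535    # widen the ephemeral port range
net.ipv4.tcp_fin_timeout = 15                # hold FIN-WAIT-2 sockets for less time
net.ipv4.tcp_tw_reuse = 1                    # reuse TIME_WAIT sockets for new outbound connections

Load the changes with sysctl -p and measure before and after; tuning without measuring is just guessing.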

There are many layers to cross before you reach the kernel, and even to tune the kernel you need to understand how the application works, how it communicates, and how it uses internal and external applications, libraries, and utilities.





Common multi-layer software architecture (4).



In common transactional architectures, tuning works like a tourniquet on a bullet wound: it will probably save the patient's life. In highly transactional applications, though, tuning only props the system up — it does not solve the problem, and your application will die slowly and painfully.

References:

  1. https://en.wikipedia.org/wiki/Monolithic_kernel
  2. http://dpdk.org/
  3. https://en.wikipedia.org/wiki/Exokernel
  4. http://www.guidanceshare.com

Wednesday, October 15, 2014

Why companies should embrace OSS and the DevOps movement

It’s not a secret that the best and most competitive technologies in the world today are built on some Open Source component — the Linux kernel, the GNU/Linux operating system, a version of BSD, modules, drivers — or on a programming language that is completely free or has a free compiler or interpreter.

On the other hand, we have a complex and extensive range of solutions being born with almost every blink, and we need options to integrate them into existing technologies; we have to interconnect new software with hardware, in almost every possible combination. Basically, no matter what kind of hardware or software we want or have to work with, if we want to survive in the era of cloud solutions, building an interface to interconnect them will always be the fastest answer. We will always have to be interconnected, and this is a main principle that cloud architectures are required to satisfy: the hardware is defined by software, everything is “as a Service” (XaaS), and everything has to be able to interconnect with something else. In short, this is the Application Programming Interface (API) age.

Nowadays we need technologies with APIs (REST, SOAP), communities (Reddit, IRC, …), and accessible information (blogs, wikis); otherwise we have to be able to build them ourselves, the fastest way possible. We need tools, languages, plugins — everything we can use to build these interconnections and better solutions — and the only platform that gives us the speed required is Open Source. It’s no mystery that technologies based on Open Source move much faster than any kind of proprietary technology, so if we don’t want to become technological dinosaurs from one day to the next, we have to know about agile development languages (python, ruby, groovy, …), collaborative work applications (gitlab, github, trac, bugzilla, …), and source code management and revision control (git, svn, …) — tools that move and help us with the speed required to build new products. Today, knowing about Open Source, licenses, programming languages, and communities is no longer optional.

Speed is not the only thing Open Source gives us. For any professional, having software freedom without limits — whether the software solves the problem 100% or just delivers a solid foundation to modify into what is required — and being able to run POCs without asking a company for a copy of the software is priceless. This also has an impact on the number of users downloading the same software, who can modify it, test it, and add new characteristics.

I don’t want to paint a vision where nothing exists besides Open Source Software, but to compete technologically we have to know the ecosystem — and to innovate we must know the tools and work with the right people for the job, people who can integrate all kinds of solutions. But who are these guys? They are like super-sysadmins + developers + Open Source gurus, all this and more: DevOps engineers (like me). Better check this post by the puppetlabs people; maybe in the future I’ll write my own.

But you don’t need to believe me. I challenge you to find a job offer from a company that wants to innovate (any real IT company), regardless of language or country, that is not looking for DevOps guys or Open Source knowledge.

Let’s cut to the chase: any company that wants to innovate technologically needs DevOps on its payroll, and any DevOps who wants a decent job needs Open Source knowledge.

Hope you enjoyed the reading, see you soon!!!
$ commit

Monday, October 8, 2012

Free EL YUM Repositories

If you are using some flavor of Enterprise Linux, you will eventually get tired of downloading rpm packages from Here (BTW, a really great page when you don't have access to FTP services — damn telecom/security guys), and eventually you will need repositories configured on your server to resolve dependencies. Here are some repositories provided by Oracle for FREE — but of course, with NO SUPPORT.

OEL 4/RHEL 4, Update 6 or Newer
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el4.repo

 

OEL 5/RHEL 5
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo

 

OEL 6/RHEL 6
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-ol6.repo

 

Oracle VM 2
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-ovm2.repo

 

After downloading the repo file, select the correct version for your Linux by setting the "enabled" variable to 1 in the matching section.


[root@openstack yum.repos.d]# cat /etc/yum.repos.d/public-yum-ol6.repo
[ol6_latest]
name=Oracle Linux $releasever Latest ($basearch)
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/latest/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1


And of course, the EPEL repositories. Look for your correct version here: EPEL Repository, then install the rpm, like this one:


[root@openstack yum.repos.d]# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.2 (Santiago)
[root@openstack ~]# rpm -Uvh http://fedora.mirror.nexicom.net/epel/6/x86_64/epel-release-6-7.noarch.rpm
Retrieving http://fedora.mirror.nexicom.net/epel//6/x86_64/epel-release-6-7.noarch.rpm
warning: /var/tmp/rpm-tmp.h0G5aN: Header V3 RSA/SHA256 Signature, key ID 0608b895: NOKEY
Preparing...                ########################################### [100%]
   1:epel-release           ########################################### [100%]
[root@openstack ~]# ll /etc/yum.repos.d/
total 8
-rw-r--r--. 1 root root  957 May  9 10:55 epel.repo
-rw-r--r--. 1 root root 1056 May  9 10:55 epel-testing.repo
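
With the repo files in place, a quick check that everything is active and resolvable (a sketch; repo names and package counts will vary with your version):

[root@openstack ~]# yum clean all
[root@openstack ~]# yum repolist enabled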