Search Results

Search found 62 results on 3 pages for 'ramon riera'.

Page 2 of 3

  • Multiple servers acting like a single one with all the hardware?

    - by marc.riera
    Hello, I currently have 10 servers for HPC, oriented to power computing. My users need to launch several processes using qmake. The users are used to working with Ubuntu 9.10, and the software from the repositories is suitable for them. I've deployed Ubuntu 9.10 to all 10 servers (PXE rocks). So far we work with parallel-ssh and cluster-ssh, which allow us to launch the same process on all servers. With these tools the servers remain independent, but with the same software and the same launched command. Now we would like to go to the next step and see all the servers as a single one, with the resources of the other 9 as if they were its own. The difference would be substantial in processing time and also in the time needed to design the command to launch. Any advice on which software to use would be very useful. Thanks
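
    For reference, the current fan-out workflow described above looks something like this (a sketch; the hostfile and project path are assumptions):

        # run the same build command on every host listed in hosts.txt,
        # with inline output (-i) and no timeout (-t 0)
        parallel-ssh -h hosts.txt -t 0 -i 'cd /shared/project && qmake && make -j4'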

    Read the article

  • Apache availability with two front-ends in different locations. Is it possible?

    - by marc.riera
    Hello, I have two locations (our office and a service provider). One DNS server (BIND) serves our domain as authoritative, and a web server at the service provider hosts our corporate web on a private server. Now we are planning to upgrade our server at the ISP to a new one, and I would like to use this opportunity to improve our service. Is it possible to mount a high-availability Apache/MySQL/PHP setup across two different locations? I will install a BIND slave on the same new server, which I hope will make things easier, but I need some hints and tips on how to set it up. Thanks.
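
    A sketch of the slave zone declaration for the new ISP server's named.conf (the zone name and master IP are assumptions):

        zone "example.org" {
            type slave;
            masters { 192.0.2.10; };      // the authoritative server at the office
            file "slaves/db.example.org"; // local copy refreshed from the master
        };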

    Read the article

  • /usr/sbin/sshd isn't linked against PAM on one of my systems. What is wrong and how can I fix it?

    - by marc.riera
    Hi, I'm using AD as my user account server with LDAP. Most of the servers run with "UsePAM yes" except this one, whose sshd lacks PAM support:

        root@linserv9:~# ldd /usr/sbin/sshd
            linux-vdso.so.1 =>  (0x00007fff621fe000)
            libutil.so.1 => /lib/libutil.so.1 (0x00007fd759d0b000)
            libz.so.1 => /usr/lib/libz.so.1 (0x00007fd759af4000)
            libnsl.so.1 => /lib/libnsl.so.1 (0x00007fd7598db000)
            libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00007fd75955b000)
            libcrypt.so.1 => /lib/libcrypt.so.1 (0x00007fd759323000)
            libc.so.6 => /lib/libc.so.6 (0x00007fd758fc1000)
            libdl.so.2 => /lib/libdl.so.2 (0x00007fd758dbd000)
            /lib64/ld-linux-x86-64.so.2 (0x00007fd759f0e000)

    I have these packages installed:

        root@linserv9:~# dpkg -l | grep -E 'pam|ssh'
        ii  denyhosts           2.6-2.1              an utility to help sys admins thwart ssh hac
        ii  libpam-modules      0.99.7.1-5ubuntu6.1  Pluggable Authentication Modules for PAM
        ii  libpam-runtime      0.99.7.1-5ubuntu6.1  Runtime support for the PAM library
        ii  libpam-ssh          1.91.0-9.2           enable SSO behavior for ssh and pam
        ii  libpam0g            0.99.7.1-5ubuntu6.1  Pluggable Authentication Modules library
        ii  libpam0g-dev        0.99.7.1-5ubuntu6.1  Development files for PAM
        ii  openssh-blacklist   0.1-1ubuntu0.8.04.1  list of blacklisted OpenSSH RSA and DSA keys
        ii  openssh-client      1:4.7p1-8ubuntu1.2   secure shell client, an rlogin/rsh/rcp repla
        ii  openssh-server      1:4.7p1-8ubuntu1.2   secure shell server, an rshd replacement
        ii  quest-openssh       5.2p1_q13-1          Secure shell

    What am I doing wrong? Thanks.

    Edit: here is the PAM configuration for sshd:

        root@linserv9:~# cat /etc/pam.d/sshd
        # PAM configuration for the Secure Shell service

        # Read environment variables from /etc/environment and
        # /etc/security/pam_env.conf.
        auth       required     pam_env.so # [1]
        # In Debian 4.0 (etch), locale-related environment variables were moved to
        # /etc/default/locale, so read that as well.
        auth       required     pam_env.so envfile=/etc/default/locale

        # Standard Un*x authentication.
        @include common-auth

        # Disallow non-root logins when /etc/nologin exists.
        account    required     pam_nologin.so

        # Uncomment and edit /etc/security/access.conf if you need to set complex
        # access limits that are hard to express in sshd_config.
        # account  required     pam_access.so

        # Standard Un*x authorization.
        @include common-account

        # Standard Un*x session setup and teardown.
        @include common-session

        # Print the message of the day upon successful login.
        session    optional     pam_motd.so # [1]

        # Print the status of the user's mailbox upon successful login.
        session    optional     pam_mail.so standard noenv # [1]

        # Set up user limits from /etc/security/limits.conf.
        session    required     pam_limits.so

        # Set up SELinux capabilities (need modified pam)
        # session  required     pam_selinux.so multiple

        # Standard Un*x password updating.
        @include common-password

    Edit2: "UsePAM yes" fails. With this configuration ssh fails to start:

        root@linserv9:/home/admmarc# cat /etc/ssh/sshd_config | grep -vE "^[ \t]*$|^#"
        Port 22
        Protocol 2
        ListenAddress 0.0.0.0
        RSAAuthentication yes
        PubkeyAuthentication yes
        AuthorizedKeysFile .ssh/authorized_keys
        ChallengeResponseAuthentication yes
        UsePAM yes
        Subsystem sftp /usr/lib/sftp-server
        root@linserv9:/home/admmarc#

    The error it gives is as follows:

        root@linserv9:/home/admmarc# /etc/init.d/ssh start
         * Starting OpenBSD Secure Shell server sshd
        /etc/ssh/sshd_config: line 75: Bad configuration option: UsePAM
        /etc/ssh/sshd_config: terminating, 1 bad configuration options
        ...fail!
        root@linserv9:/home/admmarc#
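
    Given that quest-openssh appears in the package list alongside openssh-server, it may be worth confirming which package's sshd binary is actually on disk and running — a quick check, paths assumed:

        # which package owns the binary, and is it PAM-enabled?
        dpkg -S /usr/sbin/sshd
        ldd /usr/sbin/sshd | grep -i pam    # no output means built without PAM
        ps -ef | grep [s]shd                # confirm the path of the running daemon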

    Read the article

  • EMC/Legato/Networker failed to recover files: Cross Platform Recovery not supported.

    - by marc.riera
    Software used to back up: EMC Legato Networker. The Legato server runs Windows; the Legato client is the same hardware (Fedora something two years ago, Ubuntu now). We are trying to recover from an old client which is no longer available.

    So this is the situation. On 07/20/2008 we backed up a Samba server (Fedora something) to tape, setting 1 year as both browse policy and retention policy. Now the tape is recyclable. We took down the DNS name. We deleted the Legato client configuration. That Legato client machine was reinstalled and is doing other stuff on Ubuntu 10.04, with a different name but the same IP.

    Now, two years and some months later, we need to recover a folder from the 2008 backup of the fedora-samba-server. First problem: Legato does not show the client name because the config was deleted, so we create it again. We set the old DNS name back on track, pointing to the same IP where the old server was (same MAC address ;)). We created a new 'old client configuration' pointing to the new server (different Legato IP for the client, I suppose).

    The ssid containing the needed folder is on two tapes, 20 and 22. The index for that backup is on tape 21. We put these tapes in the jukebox (IBM T4000) -- not important for the issue. All three tapes have passed their browse and retention times, so they are recyclable.

    We get the clone id from the ssid with the following command:

        mminfo -avot -q "ssid=<ssid>" -r cloneid

    We set the tapes to not recyclable:

        nsrmm -S <ssid>/<cloneid> -o notrecyclable

    We change the retention for the tapes to a future date:

        nsrmm -S <ssid> -e 01/20/2011

    We check the dates are correct:

        mminfo -avV -q "ssid=<ssid>" -r ssbrowse(26),ssretent(26),savetime

    So far it's OK. We close the terminal and restart the server, just to be sure. Finally, we recover the index for the ssid containing the folder:

        nsrck -L7 -t "07/20/2008" oldservername.domain.org

    There, we open the Networker User, select the server, select the old client as source, select the new client as destination. And this is what I get (imgur image of the output): http://i.imgur.com/1nOr8.png

    Should I understand that I need to install whatever operating system was running on the old "linux server"/"networker client" to be able to restore 26 MB of files? Thanks

    Read the article

  • Which is the cheapest machine where I can run Linux and plug in some webcams? (and with network interface)

    - by marc.riera
    I'm looking for a very cheap machine to run a Linux distro with security (anti-theft) software. I would like to be able to connect it to the network and to a couple of webcams, either IP webcams or USB webcams. The idea is to have a machine with batteries, laptop style, but there is no need to have a display/monitor attached all the time. I'm planning to spend no more than $200, in case it also gets stolen. Any advice on what to buy? (All modifications to this security plan are welcome.) Thanks.

    Read the article

  • Bypassing Router's DNS Settings

    - by Ramon Marco Navarro
    Is there a way to bypass my ISP-provided CPE/router's DNS settings? I'd like to use OpenDNS but I am unable to access the administrator account of the CPE. I tried logging in using the default passwords (admin/admin, admin/1234, etc.) to no avail. I found out later that the admin password is generated with a generator that takes the CPE's MAC address as input. I tried emailing the manufacturer of the CPE (Huawei; the CPE is a Huawei BM625) and my ISP, but they aren't replying. I also saw similar queries (lots of them!) on Huawei's forums, without a single reply. So as a last resort, I'd like to know a way to bypass the CPE's DNS settings. My subscription is for a WiMAX service.

    I'm using Windows 7 and have already set the DNS settings for the Local Area Connection, but I still am not seeing the "You are already using OpenDNS" text on OpenDNS's site. And even when explicitly querying the OpenDNS servers I still get 208.69.38.150 rather than the expected 208.69.38.160:

        nslookup www.opendns.com. 208.67.222.222
        Server:  resolver1.opendns.com
        Address: 208.67.222.222

        Non-authoritative answer:
        Name:    www.opendns.com
        Address: 208.69.38.150
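
    It may be worth double-checking that the adapter really took the setting and that no stale answers are cached — a sketch from an elevated prompt, interface name assumed. (If the CPE intercepts all port-53 traffic outright, though, no client-side setting will get around it.)

        netsh interface ipv4 set dnsservers "Local Area Connection" static 208.67.222.222
        netsh interface ipv4 add dnsservers "Local Area Connection" 208.67.220.220 index=2
        ipconfig /flushdns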

    Read the article

  • Windows Clients: Windows or Linux Domain Controller?

    - by Ramon Marco Navarro
    I'm planning to set up a domain controller for our small computer laboratory. I'm a little confused as to which operating system to use for it.

    What's in the lab: 25 units running a mix of Windows 7 and Windows XP. The domain controller will only have 2 GB of RAM, running a C2D E7200. (Is this enough?)

    What we want:
    - The domain controller will also be running a git server.
    - The domain controller will also be used as a general development machine (mostly Java, PHP).
    - A way to centralize updates for the Windows clients, so that they won't each have to download the same patches from the remote site; the machines would just query the local domain controller and get the updates from there.

    Our head recommended that I virtualize a Windows Server 2008 system under a Linux host and use the former as the domain controller and the latter for development, or the other way around. A comparison of the advantages and disadvantages of using a Linux distribution or Windows Server 2008 in this situation would also be appreciated. As you may have noticed by now, I'm kind of new to setting up a domain, so I hope you guys will be able to help me. Thank you.

    Read the article

  • Convert to WAN port

    - by Ramon Marco Navarro
    I have an ADSL2+ modem/router. Is it possible to set one of its four LAN ports as a WAN port? If yes, how? The brand/model is a Prolink H9200P. I already contacted Prolink about this and their site said to wait one business day, but I'm still asking here in case someone can answer faster than Prolink. Thank you.

    Read the article

  • How to restore a VirtualBox image given the .vdi file

    - by Ramon Tayag
    Background: I have VirtualBox on Linux, and recently the drive with the system files failed on me. I'm able to use a Live CD to view the files of the storage drive, and through this I can copy my data files to another computer while I work on this one. How do I load the .vdi on another computer (or in this case, on a fresh install of Ubuntu)? I see many samples online of how to export an image, but they assume the host is currently working. Thanks!
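
    A minimal sketch of attaching a copied .vdi to a brand-new VM with VBoxManage (recent syntax assumed; the VM name, OS type, and path are placeholders):

        VBoxManage createvm --name "restored" --ostype Ubuntu_64 --register
        VBoxManage storagectl "restored" --name "SATA" --add sata
        VBoxManage storageattach "restored" --storagectl "SATA" \
            --port 0 --device 0 --type hdd --medium /path/to/copied.vdi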

    Read the article

  • Windows 2003 Cluster: Failover Delay

    - by Ramon Marco Navarro
    I am testing the failover policies of our test failover cluster system. When I shut down the node that currently controls the cluster (NODE1), it takes about 2 minutes and 40 seconds before the next node on the preferred list (NODE2) takes control of the cluster. I tried changing the LooksAlive and IsAlive intervals to 5000 ms on all resources, but that didn't help. Looking at the Event Viewer on the remaining nodes, it shows that NODE1 being down was detected almost instantly, but it took another ~2:40 for it to be removed from the live cluster list and for NODE2 to take over. Is there any way of changing or shortening this "failover delay"?

    This is the setup of the cluster:
    (1) One ClusterDC connected to the public network
    (3) Three nodes running Win2003 with a quorum type of MNS
    The private network is connected to a network hub.

                        ________________                         _________________
    (ClusterDC)=------=|                |=------=(Node1)=------=|                 |
                       | Public Network |=------=(Node2)=------=| Private Network |
                       |    (Switch)    |=------=(Node3)=------=|      (Hub)      |
                        ----------------                         -----------------
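
    For reference, the cluster.exe equivalent of the poll-interval change already tried through the GUI might look like this (the resource name is an assumption, and the poll intervals alone usually don't govern the regroup delay after a node loss):

        :: list current values, then shorten the poll intervals (milliseconds)
        cluster res "Cluster IP Address" /prop
        cluster res "Cluster IP Address" /prop LooksAlivePollInterval=5000
        cluster res "Cluster IP Address" /prop IsAlivePollInterval=5000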

    Read the article

  • Confused about SPF Record setup

    - by Ramon A.
    Hello, I'm confused about how I should set up SPF records for my multiple domains. Here is my setup:
    (a) domain1.com points to server1
    (b) mail.domain1.com points to server2
    (c) domain2.com is a vhost on server1
    (d) domain3.com is a vhost on server1
    (e) and so on...

    I want the SPF records set up so that domain1.com, domain2.com, and domain3.com are authorized to send email through mail.domain1.com. I'm confused about whether to put the SPF record on each domain, or on the main server only.
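
    SPF is checked against the domain in the sender address, so each domain that appears as a sender needs its own TXT record; a sketch in zone-file form (mechanism details assumed, authorizing the host behind mail.domain1.com):

        domain1.com.  IN TXT "v=spf1 a:mail.domain1.com ~all"
        domain2.com.  IN TXT "v=spf1 a:mail.domain1.com ~all"
        domain3.com.  IN TXT "v=spf1 a:mail.domain1.com ~all"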

    Read the article

  • How to automatically define functions and aliases on remote server after ssh login

    - by Ramon
    I want to define bash functions and aliases in my remote shell automatically on login. I can't put the definitions into .profile or similar because the users I log in as are often shared with others who use the same systems, and I don't have control of this. What I'm trying to do is execute a few bash function definitions in the remote process and then continue as a login shell. I tried this but it did not work:

        cat ~/.profile - | ssh -tt user@host bash -l

    Any ideas?
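
    One approach that may work (a sketch; the file names are assumptions): copy the definitions over first, then start an interactive bash that reads them instead of the shared rc files:

        # ship the definitions, then launch an interactive shell that sources them
        scp ~/.my_aliases user@host:/tmp/my_aliases
        ssh -t user@host 'bash --rcfile /tmp/my_aliases -i'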

    Read the article

  • SSH equivalent of .profile/.bashrc

    - by Ramon
    I am looking for a way to automatically define some aliases inside my session on any server I ssh to. I can't put them in the .bashrc files on the server because the user accounts I log in with are shared by other people; besides, there are dozens of them, and maintaining a script on every machine would be painful. I know I could use expect to type the aliases automatically, but I was wondering whether OpenSSH has anything built-in that could conceivably be used to achieve this.

    Read the article

  • WinHttpCertCfg not importing certificate

    - by Ramon Zarazua
    I need to set up a deployment script that imports an SSL certificate that my service uses. I have tried importing with WinHttpCertCfg and with CertMgr, to no avail. Here are the command lines I have tried with both:

        winhttpcertcfg.exe -i <certname>.pfx -c LOCAL_MACHINE\My -p <password> -a <user service runs as>

        CertMgr.exe -add -all -s -r localMachine -c <cert name> My

    It seems from what I have investigated that CertMgr does not let you import certificates with a password, so I'd rather get winhttpcertcfg working. When I run them I get the following output:

        WinHttpCertCfg:
        Microsoft (R) WinHTTP Certificate Configuration Tool
        Copyright (C) Microsoft Corporation 2001.

        CertMgr:
        CertMgr Succeeded

    However, when I look at the local machine certificates in MMC, try to load the certificate from my service, list it out through winhttpcertcfg, or even look at the registry under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\SystemCertificates\MY\Certificates, it is not found. I have checked all of the following:
    - If I install the cert manually (through the CertMgr.msc dialogs), it works.
    - The user installing is running as administrator.
    - The user installing has full access on the certificate.
    - The tools do print an error when something is wrong (e.g. a wrong password).
    - Tried it on multiple machines (all of them Server 2008 R2).

    At this point I am officially out of ideas. Thank you.
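
    If the goal is just a scriptable PFX import into the machine MY store, certutil (built into Server 2008 R2) may be worth a try as an alternative — a sketch, run from an elevated prompt:

        certutil -f -p <password> -importpfx <certname>.pfx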

    Read the article

  • Enhance GIMP’s Image Editing Power with Gimp Paint Studio

    - by Asian Angel
    Does your GIMP installation need a little super-charging? Using Gimp Paint Studio you can add a wonderful set of brushes, tools, and more to GIMP and take your work up to the next level. For our example we chose to install the beta version of Gimp Paint Studio on Ubuntu 10.10.

    Once you download the .zip file and unzip it, all that you need to do is manually transfer the contents to the appropriate GIMP folders on your system. Note: make certain to make a back-up copy of the "sessionrc" and "toolrc" files before you transfer Gimp Paint Studio into your installation (in case you would like or need to revert to the originals later).

    When you finish transferring the files, start GIMP up and get ready to have fun. If your experience is like ours, you should see a noticeable difference in window size and arrangement from the default settings. The exceptional sample artwork done by Ramon Miranda and Mozart Couto using Gimp Paint Studio is really impressive, and an introduction video shows Gimp Paint Studio in action.

    Download Gimp Paint Studio for Linux, Windows, and Mac [Gimp Paint Studio Homepage]. Keep in mind that there are stable and beta releases available, so choose the version that you are most comfortable using. The installation guides for Gimp Paint Studio include wonderful video and written versions for adding Gimp Paint Studio to your system, and there is also a video tutorials library and a gallery on the Gimp Paint Studio site.
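
    A sketch of the backup step on Ubuntu, assuming GIMP 2.6's default configuration directory:

        # keep the originals so the stock layout can be restored later
        cp ~/.gimp-2.6/sessionrc ~/.gimp-2.6/sessionrc.bak
        cp ~/.gimp-2.6/toolrc    ~/.gimp-2.6/toolrc.bak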

    Read the article

  • Creating a multi-tenant application using PostgreSQL's schemas and Rails

    - by ramon.tayag
    Stuff I've already figured out: I'm learning how to create a multi-tenant application in Rails that serves data from different schemas based on what domain or subdomain is used to view the application. I already have a few concerns answered:

    How can you get subdomain-fu to work with domains as well? Here's someone that asked the same question, which leads you to this blog.

    What database, and how will it be structured? Here's an excellent talk by Guy Naor, and a good question about PostgreSQL and schemas. I already know my schemas will all have the same structure. They will differ only in the data they hold. So, how can you run migrations for all schemas? Here's an answer.

    Those three points cover a lot of the general stuff I need to know. However, in the next steps I seem to have many ways of implementing things. I'm hoping that there's a better, easier way.

    Finally, to my question: when a new user signs up, I can easily create the schema. However, what would be the best and easiest way to load the structure that the rest of the schemas already have? Here are some questions/scenarios that might give you a better idea.

    Should I pass it on to a shell script that dumps the public schema into a temporary one, and imports it back to my main database (pretty much like what Guy Naor says in his video)? Here's a quick summary/script I got from the helpful #postgres on freenode. While this will probably work, I'm gonna have to do a lot of stuff outside of Rails, which makes me a bit uncomfortable... which also brings me to the next question.

    Is there a way to do this straight from Ruby on Rails? Like create a PostgreSQL schema, then just load the Rails database schema (schema.rb - I know, it's confusing) into that PostgreSQL schema.

    Is there a gem/plugin that has these things already? Methods like "create_pg_schema_and_load_rails_schema(the_new_schema_name)". If there's none, I'll probably work at making one, but I'm doubtful about how well tested it'll be with all the moving parts (especially if I end up using a shell script to create and manage new PostgreSQL schemas).

    Thanks, and I hope that wasn't too long!

    UPDATE May 11, 2010 11:26 GMT+8: Since last night I've been able to get a method to work that creates a new schema and loads schema.rb into it. Not sure if what I'm doing is correct (it seems to work fine, so far) but it's a step closer at least. If there's a better way please let me know.

        module SchemaUtils
          def self.add_schema_to_path(schema)
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{schema}, #{conn.schema_search_path}"
          end

          def self.reset_search_path
            conn = ActiveRecord::Base.connection
            conn.execute "SET search_path TO #{conn.schema_search_path}"
          end

          def self.create_and_migrate_schema(schema_name)
            conn = ActiveRecord::Base.connection

            schemas = conn.select_values(
              "select * from pg_namespace where nspname != 'information_schema' AND nspname NOT LIKE 'pg%'"
            )

            if schemas.include?(schema_name)
              tables = conn.tables
              Rails.logger.info "#{schema_name} exists already with these tables #{tables.inspect}"
            else
              Rails.logger.info "About to create #{schema_name}"
              conn.execute "create schema #{schema_name}"
            end

            # Save the old search path so we can set it back at the end of this method
            old_search_path = conn.schema_search_path

            # Tried to set the search path like in the methods above (from Guy Naor):
            #   conn.execute "SET search_path TO #{schema_name}"
            # But the connection itself seems to remember the old search path.
            # If set this way, it works.
            conn.schema_search_path = schema_name

            # Directly from databases.rake.
            # In Rails 2.3.5, databases.rake can be found in railties/lib/tasks/databases.rake
            file = "#{Rails.root}/db/schema.rb"
            if File.exists?(file)
              Rails.logger.info "About to load the schema #{file}"
              load(file)
            else
              abort %{#{file} doesn't exist yet. It's possible that you just ran a migration!}
            end

            Rails.logger.info "About to set search path back to #{old_search_path}."
            conn.schema_search_path = old_search_path
          end
        end
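
    A usage sketch for the method above (the tenant naming scheme is an assumption): call it right after the new account is created, and the fresh schema comes up with the same structure as the others:

        # e.g. in an after_create hook or in the signup controller
        SchemaUtils.create_and_migrate_schema("tenant_#{account.subdomain}")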

    Read the article

  • Creating a Reverse Proxy using Jpcap

    - by Ramon Marco Navarro
    I need to create a program that receives HTTP requests and forwards those requests to the web servers. I have successfully made this using only Java sockets, but the client needed the program to be implemented with Jpcap. I'd like to know if this is possible, and what literature I should be reading for this project. This is what I have now, stitched together from pieces of the Jpcap tutorial:

        import java.net.InetAddress;
        import java.io.*;
        import jpcap.*;
        import jpcap.packet.*;

        public class Router {
            public static void main(String args[]) {
                // Obtain the list of network interfaces
                NetworkInterface[] devices = JpcapCaptor.getDeviceList();

                // for each network interface
                for (int i = 0; i < devices.length; i++) {
                    // print out its name and description
                    System.out.println(i + ": " + devices[i].name + "(" + devices[i].description + ")");
                    // print out its datalink name and description
                    System.out.println(" datalink: " + devices[i].datalink_name + "(" + devices[i].datalink_description + ")");
                    // print out its MAC address
                    System.out.print(" MAC address:");
                    for (byte b : devices[i].mac_address)
                        System.out.print(Integer.toHexString(b & 0xff) + ":");
                    System.out.println();
                    // print out its IP address, subnet mask and broadcast address
                    for (NetworkInterfaceAddress a : devices[i].addresses)
                        System.out.println(" address:" + a.address + " " + a.subnet + " " + a.broadcast);
                }

                int index = 1; // set index of the interface that you want to open.

                // Open an interface with openDevice(NetworkInterface intrface, int snaplen, boolean promics, int to_ms)
                JpcapCaptor captor = null;
                try {
                    captor = JpcapCaptor.openDevice(devices[index], 65535, false, 20);
                    captor.setFilter("port 80 and host 192.168.56.1", true);
                } catch (java.io.IOException e) {
                    System.err.println(e);
                }

                // call processPacket() to let Jpcap call PacketPrinter.receivePacket() for every packet capture.
                captor.loopPacket(-1, new PacketPrinter());
                captor.close();
            }
        }

        class PacketPrinter implements PacketReceiver {
            // this method is called every time Jpcap captures a packet
            public void receivePacket(Packet p) {
                JpcapSender sender = null;
                try {
                    NetworkInterface[] devices = JpcapCaptor.getDeviceList();
                    sender = JpcapSender.openDevice(devices[1]);
                } catch (IOException e) {
                    System.err.println(e);
                }

                IPPacket packet = (IPPacket) p;
                try {
                    // IP Address of machine sending HTTP requests (the client)
                    // It's still on the same LAN as the servers for testing purposes.
                    packet.dst_ip = InetAddress.getByName("192.168.56.2");
                } catch (java.net.UnknownHostException e) {
                    System.err.println(e);
                }

                // create an Ethernet packet (frame)
                EthernetPacket ether = new EthernetPacket();
                // set frame type as IP
                ether.frametype = EthernetPacket.ETHERTYPE_IP;
                // set source and destination MAC addresses
                // MAC Address of machine running reverse proxy server
                ether.src_mac = new MacAddress("08:00:27:00:9C:80").getAddress();
                // MAC Address of machine running web server
                ether.dst_mac = new MacAddress("08:00:27:C7:D2:4C").getAddress();
                // set the datalink frame of the packet as ether
                packet.datalink = ether;

                // send the packet
                sender.sendPacket(packet);
                sender.close();

                // just print out a captured packet
                System.out.println(packet);
            }
        }

    Any help would be greatly appreciated. Thank you.

    Read the article

  • Set username credential for a new channel without creating a new factory

    - by Ramon
    I have a backend service and front-end services. They communicate via the trusted subsystem pattern. I want to transfer a username from the frontend to the backend, and do this via username credentials as described here: http://msdn.microsoft.com/en-us/library/ms730288.aspx

    This does not work in our scenario, where the front-end builds a backend service channel factory once:

        channelFactory = new ChannelFactory<IBackEndService>(.....);

    New channels are then created through the channel factory. I can only set the credentials once; after that I get an exception because the username object is read-only:

        channelFactory.Credentials.UserName.UserName = "myCoolFrontendUser";
        var channel = channelFactory.CreateChannel();

    Is there a way to create the channel factory only one time, as it is expensive to create, and then specify username credentials when creating a channel?
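
    For what it's worth, WCF ties the username credential to the factory rather than to the channel, so the usual workarounds are either to cache one factory per distinct user, or to swap in a fresh ClientCredentials behavior before the factory opens — a sketch, endpoint configuration name assumed:

        // replace the read-only default credentials on a not-yet-opened factory
        var factory = new ChannelFactory<IBackEndService>("BackendEndpoint");
        factory.Endpoint.Behaviors.Remove<ClientCredentials>();
        var credentials = new ClientCredentials();
        credentials.UserName.UserName = "myCoolFrontendUser";
        factory.Endpoint.Behaviors.Add(credentials);
        var channel = factory.CreateChannel();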

    Read the article

  • Test (with RSpec) a controller outside of a Rails environment

    - by ramon.tayag
    I'm creating a gem that will generate a controller for the Rails app that uses it. It's been a trial-and-error process for me when trying to test a controller. Testing models has been pretty easy, but when testing controllers, ActionController::TestUnit is not included (as described here). I've tried requiring it, and all similar-sounding things in Rails, but it hasn't worked. What would I need to require in the spec_helper to get the test to work? Thanks!

    Read the article

  • What's a good tutorial for creating a gem with RSpec?

    - by ramon.tayag
    I've been searching around for ways to create a gem with RSpec, but haven't found descriptive tutorials. I started out with Ryan Bates' Making a gem, but I'm looking for a tutorial that discusses creating an acts_as-style gem with RSpec. By acts_as, I mean that the gem adds certain methods to an existing class in Rails. Why is this important? Because I've found gem templates like New Gem and got a spec to run, but when I try to test an ActiveRecord object it starts choking. I've tried requiring active_record in spec_helper.rb, but I must be doing something wrong because it doesn't solve the problem. When it comes to plugins, I found this Rails Guide; if there's a gem version of that around, that'd be awesome. Thanks guys! P.S. I love screencasts.

    Read the article

  • Getting RPXNow and Facebook Open Graph to Play Nicely

    - by ramon.tayag
    A requirement for using RPXNow is to set your Facebook application's Connect URL to http://mydomain.rpxnow.com. I was just trying to implement Facebook's Open Graph, and I see that it tells you to set the Base Domain to the domain that will contain the app_id. However, Facebook does not allow these two domains to look different. When I try to set the Base Domain to mydomain.com, I get this error:

        Validation failed. Base Domain is not valid. Connect URL must be derived from your Base Domain.

    Should I create two apps - one for use with RPXNow, and another for use with Open Graph? If not, what should I do? Thanks

    Read the article

  • Send arrow keys using Ganymed SSH in Java

    - by José Ramón Pérez Rubio
    I am using Ganymed SSH to connect to a remote machine, and apart from sending commands I need to send the arrow keys (the left and right keys). I can send commands, but when I send the arrow keys nothing happens. This is what I have:

        public boolean createShell() throws Exception {
            try {
                // ...
                m_session = connection.openSession();
                m_commandWriter = new OutputStreamWriter(m_session.getStdin());
                String encoding = m_commandWriter.getEncoding(); // encoding is UTF8
                m_errorPipe = new SSHSyncPipe(m_session.getStderr());
                m_outputPipe = new SSHSyncPipe(m_session.getStdout());
                m_outputPipe.start();
                m_errorPipe.start();
                // m_session.requestPTY("bash");
                m_session.requestDumbPTY();
                m_session.startShell();
                m_shellCreated = true;
                return true;
            }
        }

    So if I use

        m_commandWriter.write("ls\r\n");
        m_commandWriter.flush();

    it works, but

        m_commandWriter.write(37); // 37 is the code for the left arrow
        m_commandWriter.flush();

    doesn't work. Does anyone know what I am doing wrong? Thank you
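
    A note that may explain it: 37 is the Windows virtual-key code for the left arrow, not anything a terminal understands; on a PTY the arrow keys arrive as multi-byte ANSI escape sequences. A sketch of what the write would look like instead:

        // ESC [ D = left arrow, ESC [ C = right arrow on a VT100-style terminal
        m_commandWriter.write("\u001B[D");
        m_commandWriter.flush();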

    Read the article

  • Mysterious constraints problem with SQL Server 2000

    - by Ramon
    Hi all, I'm getting the following error from a VB.NET web application written in VS 2003, on framework 1.1. The web app is running on Windows Server 2000 with IIS 5, and is reading from a SQL Server 2000 database running on the same machine.

        System.Data.ConstraintException: Failed to enable constraints. One or more rows contain values violating non-null, unique, or foreign-key constraints.
           at System.Data.DataSet.FailedEnableConstraints()
           at System.Data.DataSet.EnableConstraints()
           at System.Data.DataSet.set_EnforceConstraints(Boolean value)
           at System.Data.DataTable.EndLoadData()
           at System.Data.Common.DbDataAdapter.FillFromReader(Object data, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords, DataColumn parentChapterColumn, Object parentChapterValue)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable, IDataReader dataReader, Int32 startRecord, Int32 maxRecords)
           at System.Data.Common.DbDataAdapter.FillFromCommand(Object data, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet)

    The problem appears when the web app is under high load. The system runs fine when volume is low, but when the number of requests becomes high, the system starts rejecting incoming requests with the above exception. Once the problem appears, very few requests actually make it through and get processed normally -- about 2 in every 30. The vast majority of requests fail until a SQL Server restart or IIS reset is performed; the system then starts processing requests normally, and after some time it starts throwing the same error again.

    The error occurs when a data adapter runs the Fill() method against a SELECT statement, to populate a strongly-typed dataset. It appears that the dataset does not like the data it is given and throws this exception. The error occurs on various SELECT statements, acting on different tables. I have regenerated the dataset and checked the relevant constraints, as well as the tables from which the data is read. Both the dataset definition and the data in the tables are fine.

    Admittedly, the hardware running both the web app and SQL Server 2000 is seriously outdated considering the number of incoming requests it currently receives. The amount of RAM consumed by SQL Server is dynamically allocated, and at peak times SQL Server can consume up to 2.8 GB out of a total of 3.5 GB on the server. At first I suspected some sort of index or database corruption, but after running DBCC CHECKDB no errors were found in the database.

    So now I'm wondering whether this error is a result of the hardware limitations of the system. Is it possible for SQL Server to somehow mess up the data it's supposed to pass to the dataset, resulting in constraint violations due to, say, a data type/length mismatch? I tried accessing the RowError messages of the data rows in the retrieved dataset tables, but I kept getting empty strings. I know that HasErrors = True for the datatables in question. I have not set EnforceConstraints = False, and I don't want to do that.

    Thanks in advance. Ray
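
    As a temporary diagnostic only (not a fix; the dataset and adapter names are assumptions), filling with enforcement off and then re-enabling it inside a Try/Catch usually populates the RowError strings that otherwise come back empty:

        ' Diagnostic sketch: find out exactly which rows/columns violate constraints
        Dim ds As New MyTypedDataSet()
        ds.EnforceConstraints = False
        adapter.Fill(ds)
        Try
            ds.EnforceConstraints = True
        Catch ex As ConstraintException
            For Each table As DataTable In ds.Tables
                For Each row As DataRow In table.GetErrors()
                    Debug.WriteLine(table.TableName & ": " & row.RowError)
                    For Each col As DataColumn In row.GetColumnsInError()
                        Debug.WriteLine("  column in error: " & col.ColumnName)
                    Next
                Next
            Next
        End Try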

    Read the article
