Search Results

Search found 41826 results on 1674 pages for 'underlying type'.

Page 588/1674 | < Previous Page | 584 585 586 587 588 589 590 591 592 593 594 595  | Next Page >

  • HP Storageworks 448 tape drive input/output error with Ubuntu

    - by Dan D
    I'm trying to set up a backup to tape of a machine using flexbackup. However, any attempt to write to the tape drive (via either flexbackup or just tar) results in "/dev/st0: Input/output error". The machine seems to recognise the drive (HP Storageworks Ultrium 448) and that there's a tape in it, and "mt status" seems to work... "mt -f /dev/st0 rewind" or "erase" throw no errors...

      root@stor001:/# mt status
      SCSI 2 tape drive:
      File number=0, block number=0, partition=0.
      Tape block size 0 bytes. Density code 0x42 (LTO-2).
      Soft error count since last status=0
      General status bits on (41010000): BOT ONLINE IM_REP_EN

      root@stor001:/# cat /proc/scsi/scsi
      Attached devices:
      Host: scsi0 Channel: 00 Id: 00 Lun: 00
        Vendor: HL-DT-ST Model: DVDRAM GSA-4084N Rev: KS02
        Type: CD-ROM ANSI SCSI revision: 05
      Host: scsi2 Channel: 00 Id: 03 Lun: 00
        Vendor: HP Model: Ultrium 2-SCSI Rev: S65D
        Type: Sequential-Access ANSI SCSI revision: 03

    "tell", however, does throw an error:

      root@stor001:/# mt -f /dev/st0 tell
      /dev/st0: Input/output error

    Based on a forum post I found, I tried:

      root@stor001:/# dd if=/dev/zero of=/dev/nst0 bs=1024 count=10
      10+0 records in
      10+0 records out
      10240 bytes (10 kB) copied, 5.0815 s, 2.0 kB/s

    which gave the person on the forum an error but seems to work for me. If anyone has any suggestions, I'm all ears...
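
    The diagnostics above suggest the drive and tape are basically healthy, so one low-risk thing to rule out before digging deeper is a block-size mismatch between the drive and whatever is writing to it. The sketch below is only a hypothesis to test, not a confirmed fix for the Ultrium 448; the device name and blocking factor are assumptions.

      # switch the drive to variable block mode and retry a small write
      mt -f /dev/nst0 rewind
      mt -f /dev/nst0 setblk 0                 # 0 = variable block size
      dmesg | tail -n 20                       # any st/SCSI errors logged by the kernel?
      tar -b 128 -cvf /dev/nst0 /etc/hosts     # tiny test write with an explicit blocking factor
      mt -f /dev/nst0 rewind
      tar -tvf /dev/nst0                       # can the test archive be read back?

    If the test write succeeds but flexbackup still fails, the mismatch is probably in flexbackup's blocksize setting rather than in the drive itself.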

    Read the article

  • Looking for updated BIOS for '99 Gateway in order to format/recognize >127 GB HD

    - by Jeff
    I have a '99 Gateway that's apparently too old for even Gateway to acknowledge it exists. Want to use it as a media hub and put in a 320GB HD, but it will not format above 127GB even running Win XP SP3. Read somewhere that upgrading the BIOS may do the trick, but I can't find the correct BIOS, and GW has been no help. Hoping I can just upgrade the BIOS, which is 11 years old. Any help would be much appreciated! I don't know where to look, and searches have been fruitless.

    System info:

      OS Name: Microsoft Windows XP Home Edition
      Version: 5.1.2600 Service Pack 3 Build 2600
      OS Manufacturer: Microsoft Corporation
      System Name: xxxx
      System Manufacturer: Gateway
      System Model: TABOR_II
      System Type: X86-based PC
      Processor: x86 Family 6 Model 7 Stepping 3 GenuineIntel ~596 Mhz
      BIOS Version/Date: Intel Corp. 4W4SB0X0.15A.0015.P10, 9/28/1999
      SMBIOS Version: 2.1

    BIOS info (from a free app I located):

      BIOS Type: Phoenix
      BIOS Date: September 28th 1999
      BIOS ID: 4W4SB0X0.15A.0015.P10.9909281445-None
      BIOS OEM: 4W4SB0X0.15A.0015.P10
      Chipset: Intel 440BX/ZX rev 3
      SuperIO: SMC 70x or 80x rev 0 at port 0370
      Manufacturer: Gateway
      Motherboard: WS440BX

    Read the article

  • Sshfs is not working..

    - by Devrim
    Hi, when I run

      sshpass -p 'mypass' sshfs 'root'@'68.19.40.16':/ '/dir' -o StrictHostKeyChecking=no,debug

    it successfully mounts, but it runs in the foreground. When I run it without the 'debug' parameter, it doesn't mount at all. The server is Ubuntu 8.04. Any ideas why? UPDATE: When I run the command as ROOT it does mount. It doesn't work with other users. Here is the output of an unsuccessful mount:

      $ sshpass -p 'pass' sshfs 'root'@'68.1.1.1':/ '/s6' -o StrictHostKeyChecking=no,sshfs_debug,loglevel=debug
      debug1: Reading configuration data /etc/ssh/ssh_config
      debug1: Applying options for *
      debug1: Connecting to 68.1.1.1 [68.1.1.1] port 22.
      debug1: Connection established.
      debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa type -1
      debug1: identity file /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa type -1
      debug1: Remote protocol version 2.0, remote software version OpenSSH_5.1p1 Debian-5
      debug1: match: OpenSSH_5.1p1 Debian-5 pat OpenSSH*
      debug1: Enabling compatibility mode for protocol 2.0
      debug1: Local version string SSH-2.0-OpenSSH_4.7p1 Debian-8ubuntu1.2
      debug1: SSH2_MSG_KEXINIT sent
      debug1: SSH2_MSG_KEXINIT received
      debug1: kex: server->client aes128-cbc hmac-md5 none
      debug1: kex: client->server aes128-cbc hmac-md5 none
      debug1: SSH2_MSG_KEX_DH_GEX_REQUEST(1024<1024<8192) sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_GROUP
      debug1: SSH2_MSG_KEX_DH_GEX_INIT sent
      debug1: expecting SSH2_MSG_KEX_DH_GEX_REPLY
      Warning: Permanently added '68.1.1.1' (RSA) to the list of known hosts.
      debug1: ssh_rsa_verify: signature correct
      debug1: SSH2_MSG_NEWKEYS sent
      debug1: expecting SSH2_MSG_NEWKEYS
      debug1: SSH2_MSG_NEWKEYS received
      debug1: SSH2_MSG_SERVICE_REQUEST sent
      debug1: SSH2_MSG_SERVICE_ACCEPT received
      debug1: Authentications that can continue: publickey,password
      debug1: Next authentication method: publickey
      debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_rsa
      debug1: Trying private key: /var/www/vhosts/devrim.kodingen.com/.ssh/id_dsa
      debug1: Next authentication method: password
      debug1: Authentication succeeded (password).
      debug1: channel 0: new [client-session]
      debug1: Entering interactive session.
      debug1: Sending environment.
      debug1: Sending env LANG = en_GB.UTF-8
      debug1: Sending subsystem: sftp
      Server version: 3
      debug1: channel 0: free: client-session, nchannels 1
      debug1: fd 0 clearing O_NONBLOCK
      debug1: Killed by signal 1.
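
    Since the mount works as root but not as an unprivileged user, and the session dies with "Killed by signal 1" right after the SFTP channel opens, the failure looks more like local FUSE permissions than anything on the SSH side. The sketch below is a guess to verify on Ubuntu 8.04, not a confirmed diagnosis; the username is illustrative.

      ls -l /dev/fuse /usr/bin/fusermount   # on 8.04 both normally belong to group "fuse"
      sudo adduser someuser fuse            # give the non-root user access to FUSE
      sudo chown someuser /s6               # the user must also own (or be able to write) the mount point
      # log out and back in so the new group membership applies, then retry the sshfs command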

    Read the article

  • ProFTPD / PAM issues with new centos/virtualmin install

    - by iamthewit
    I just installed CentOS 5.4 on a Rackspace cloud server and installed Virtualmin, which all seemed to go fine. The only problem I have is that I can not access the virtual servers' directories via FTP. I get the following from FileZilla:

      Status:   Connecting to 1.1.1.1:21...
      Status:   Connection established, waiting for welcome message...
      Response: 220 FTP Server ready.
      Command:  USER username
      Response: 331 Password required for username.
      Command:  PASS ***************
      Response: 230 User username logged in.
      Status:   Connected
      Status:   Retrieving directory listing...
      Command:  PWD
      Response: 257 "/" is current directory.
      Command:  TYPE I
      Response: 200 Type set to I
      Command:  PASV
      Response: 227 Entering Passive Mode (1,1,1,1,216,214)
      Command:  LIST
      Error:    Connection timed out
      Error:    Failed to retrieve directory listing

    and I get this from my /var/log/secure file:

      Sep 22 19:40:42 stickeeserver proftpd: pam_unix(proftpd:session): session opened for user username by (uid=0)
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - USER nastypasty: Login successful.
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - Preparing to chroot to directory '/home/username'
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - mod_delay/0.5: delaying for 728 usecs
      Sep 22 19:40:42 server proftpd[14051]: 94.136.40.82 (::ffff:217.207.31.60[::ffff:217.207.31.60]) - error setting IPV6_V6ONLY: Protocol not available

    Any help would be greatly appreciated. I'm not totally new to Linux, but it's not my strongest subject. I do like to know exactly why problems occur, though, and exactly how to fix them, so the more detail the better! Cheers
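
    The login succeeds and the timeout only hits after PASV/LIST, which is the usual signature of the passive-mode data ports never reaching the server (common on cloud VMs sitting behind an external firewall). A hedged sketch for ProFTPD on CentOS; the port range is illustrative and the config path is an assumption for a stock Virtualmin install:

      cat >> /etc/proftpd.conf <<'EOF'
      PassivePorts 49152 49652
      # if clients reach the server through a public IP that differs from its own address:
      # MasqueradeAddress your.public.ip.address
      EOF
      iptables -I INPUT -p tcp --dport 49152:49652 -j ACCEPT
      service proftpd restart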

    Read the article

  • To sample or not to sample...

    - by [email protected]
    Ideally, we would know the exact answer to every question. How many people support presidential candidate A vs. B? How many people suffer from H1N1 in a given state? Does this batch of manufactured widgets have any defective parts? Knowing exact answers is expensive in terms of time and money and, in most cases, is impractical if not impossible. Consider asking every person in a region for their candidate preference, testing every person with flu symptoms for H1N1 (assuming every person reported when they had flu symptoms), or destructively testing widgets to determine if they are "good" (leaving no product to sell).

    Knowing exact answers, fortunately, isn't necessary or even useful in many situations. Understanding the direction of a trend or statistically significant results may be sufficient to answer the underlying question: who is likely to win the election, have we likely reached a critical threshold for flu, or is this batch of widgets good enough to ship? Statistics help us to answer these questions with a certain degree of confidence; this focuses on how we collect data. In data mining, we focus on the use of data, that is, data that has already been collected. In some cases, we may have all the data (all purchases made by all customers); in others, the data may have been collected using sampling (voters, their demographics and candidate choice).

    Building data mining models on all of your data can be expensive in terms of time and hardware resources. Consider a company with 40 million customers. Do we need to mine all 40 million customers to get useful data mining models? The quality of models built on all data may be no better than models built on a relatively small sample. Determining how much is a reasonable amount of data involves experimentation.

    When starting the model building process on large datasets, it is often more efficient to begin with a small sample, perhaps 1,000 - 10,000 cases (records) depending on the algorithm, source data, and hardware. This allows you to see quickly what issues might arise with the choice of algorithm, algorithm settings, data quality, and the need for further data preparation. Instead of waiting for a model on a large dataset to build only to find that the results don't meet expectations, once you are satisfied with the results on the initial sample, you can take a larger sample to see if model quality improves, and to get a sense of how the algorithm scales to the particular dataset. If model accuracy or quality continues to improve, consider increasing the sample size.

    Sampling in data mining is also used to produce a held-aside or test dataset for assessing classification and regression model accuracy. Here, we reserve some of the build data (data that includes known target values) to be used for an honest estimate of model error using data the model has not seen before. This sampling transformation is often called a split because the build data is split into two randomly selected sets, often with 60% of the records being used for model building and 40% for testing.

    Sampling must be performed with care, as it can adversely affect model quality and usability. Even a truly random sample doesn't guarantee that all values are represented in a given attribute. This is particularly troublesome when the attribute with omitted values is the target. A predictive model that has not seen any examples for a particular target value can never predict that target value! For other attributes, values may consist of a single value (a constant attribute) or all unique values (an identifier attribute), each of which may be excluded during mining. Values from categorical predictor attributes that didn't appear in the training data are not used when testing or scoring datasets.

    In subsequent posts, we'll talk about three sampling techniques using Oracle Database: simple random sampling without replacement, stratified sampling, and simple random sampling with replacement.
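
    As a concrete illustration of the build/test split described above, here is a minimal sketch assuming an Oracle database reachable through SQL*Plus and an illustrative CUSTOMERS table with a CUST_ID primary key. All object names and credentials are invented for the example, and it uses a hash-based split, which is deterministic rather than truly random:

      sqlplus -s scott/tiger@orcl <<'SQL'
      -- ORA_HASH assigns each row to one of 100 buckets (0-99);
      -- buckets 0-59 (~60% of rows) become the build set
      CREATE TABLE customers_build AS
        SELECT * FROM customers WHERE ORA_HASH(cust_id, 99) < 60;
      -- the remaining ~40% become the held-aside test set
      CREATE TABLE customers_test AS
        SELECT * FROM customers WHERE ORA_HASH(cust_id, 99) >= 60;
      SQL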

    Read the article

  • Java Spotlight Episode 103: 2012 Duke Choice Award Winners

    - by Roger Brinkley
    Our annual interview with the 2012 Duke Choice Award Winners, recorded live at JavaOne 2012. Right-click or Control-click to download this MP3 file. You can also subscribe to the Java Spotlight Podcast Feed to get the latest podcast automatically. If you use iTunes you can open iTunes and subscribe with this link: Java Spotlight Podcast in iTunes.

    Show Notes

    Events
      Oct 13, Devoxx 4 Kids Nederlands
      Oct 15-17, JAX London
      Oct 20, Devoxx 4 Kids Français
      Oct 22-23, Freescale Technology Forum - Japan, Tokyo
      Oct 30-Nov 1, Arm TechCon, Santa Clara
      Oct 31, JFall, Netherlands
      Nov 2-3, JMagreb, Morocco
      Nov 13-17, Devoxx, Belgium

    Feature Interview
    Duke Choice Award Winners 2012 - Show Presentation

    London Java Community
    The second user group receiving a Duke's Choice Award this year, the London Java Community (LJC) and its users have been active in the OpenJDK, the Java Community Process (JCP) and other efforts within the global Java community.

    Student Nokia Developer Group
    This year's student winner, Ram Kashyap, is the founder and president of the Nokia Student Network, and was profiled in the "The New Java Developers" feature in the March/April 2012 issue of Java Magazine. Since then, Ram has maintained a hectic pace, graduating from the People's Education Society Institute of Technology in Bangalore, India, while working on a Java mobile startup and training students on Java ME.

    Jelastic, Inc.
    Moving existing Java applications to the cloud can be a daunting task, but startup Jelastic, Inc. offers the first all-Java platform-as-a-service (PaaS) that enables existing Java applications to be deployed in the cloud without code changes or lock-in.

    NATO
    The first-ever Community Choice Award goes to the MASE Integrated Console Environment (MICE) in use at NATO. Built in Java on the NetBeans platform, MICE provides a high-performance visualization environment for conducting air defense and battle-space operations.

    Duchess
    Rather than focus on a specific geographic area like most Java User Groups (JUGs), Duchess fosters the participation of women in the Java community worldwide. The group has more than 500 members in 60 countries, and provides a platform through which women can connect with each other and get involved in all aspects of the Java community.

    AgroSense Project
    Improving farming methods to feed a hungry world is the goal of AgroSense, an open source farm information management system built in Java and the NetBeans platform. AgroSense enables farmers, agribusinesses, suppliers and others to develop modular applications that will easily exchange information through a common underlying NetBeans framework.

    Apache Software Foundation Hadoop Project
    The Apache Software Foundation's Hadoop project, written in Java, provides a framework for distributed processing of big data sets across clusters of computers, ranging from a few servers to thousands of machines. This harnessing of large data pools allows organizations to better understand and improve their business.

    Parleys.com
    E-learning specialist Parleys.com, based in Brussels, Belgium, uses Java technologies to bring online classes and full IT conferences to desktops, laptops, tablets and mobile devices. Parleys.com has hosted more than 1,700 conferences, including Devoxx and JavaOne, for more than 800,000 unique visitors.

    Winners not presenting at the JavaOne 2012 Duke Choice Awards BOF

    Liquid Robotics
    Robotics – Liquid Robotics is an ocean data services provider whose Wave Glider technology collects information from the world's oceans for application in government, science and commercial applications. The organization features the "father of Java" James Gosling as its chief software architect.

    United Nations High Commissioner for Refugees
    The United Nations High Commissioner for Refugees (UNHCR) is on the front lines of crises around the world, from civil wars to natural disasters. To help facilitate its mission of humanitarian relief, the UNHCR has developed a light-client Java application on the NetBeans platform. The Level One registration tool enables the UNHCR to collect information on the number of refugees and their water, food, housing, health, and other needs in the field, and combines that with geocoding information from various sources. This enables the UNHCR to deliver the appropriate kind and amount of assistance where it is needed.

    Read the article

  • DNS configuration issues. Clients inside network unable to resolve DNS server's name

    - by hydroparadise
    I set up the DNS service on Ubuntu 12.04 64-bit and all appears to be well, except that my DHCP clients do not recognize my DNS server's hostname. When doing an nslookup on one of my Windows clients, I get

      C:\Users\chad>nslookup
      Default Server:  UnKnown
      Address:  192.168.1.2

    where I would expect the FQDN in the spot where UnKnown is seen. The DNS server knows itself pretty well, but I think only because I have an entry in the /etc/hosts file to resolve it. There are so many places to look I don't even know where to begin. Are there any logs I can look at? Something. Places I've looked at and configured:

      /etc/bind/zones/domain.com.db
      /etc/bind/zones/rev.1.168.192.in-addr.arpa
      /etc/bind/named.conf.local

    EDIT: '/etc/bind/zones/rev.1.168.192.in-addr.arpa'

      @ IN SOA dns-serv1.mydomain.com [email protected]. (
          2006081401;
          28800;
          604800;
          604800;
          86400 )
          IN NS dns-serv1.mydomain.com.
      2   IN PTR dns-serv1
      2   IN PTR mydomain.com

    EDIT 2: '/etc/bind/named.conf.local'

      zone "mydomain.com" {
          type master;
          file "/etc/bind/zones/mydomain.com.db";
      };
      zone "1.168.192.in-addr.arpa" {
          type master;
          file "/etc/bind/zones/rev.0.168.192.in-addr.arpa";
      };
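
    A "Default Server: UnKnown" from nslookup just means the reverse (PTR) lookup of 192.168.1.2 failed, so the reverse zone is the first place to look; note also that named.conf.local loads rev.0.168.192.in-addr.arpa while the file being edited is rev.1.168.192.in-addr.arpa. The sketch below is a hedged guess at a working reverse zone, assuming dns-serv1.mydomain.com really is 192.168.1.2 (the SOA contact is a placeholder):

      # illustrative only: PTR records should point at a fully qualified name ending in a dot
      cat > /etc/bind/zones/rev.1.168.192.in-addr.arpa <<'EOF'
      @   IN SOA dns-serv1.mydomain.com. admin.mydomain.com. (
              2006081402 ; serial, bumped
              28800
              604800
              604800
              86400 )
          IN NS  dns-serv1.mydomain.com.
      2   IN PTR dns-serv1.mydomain.com.
      EOF
      named-checkzone 1.168.192.in-addr.arpa /etc/bind/zones/rev.1.168.192.in-addr.arpa
      rndc reload
      dig -x 192.168.1.2 @127.0.0.1 +short    # should now return dns-serv1.mydomain.com.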

    Read the article

  • How to use my computer as a Headset device for my phone with Bluetooth?

    - by TheJelly
    I want to extract the audio from my phone (the analog TV and FM/AM receiver mainly) and play it through my computer speakers. There is a headphone jack but it is of non-standard size (probably a micro-jack) and I do not have access to a shop that sells that kind of equipment in my area so doing this with Bluetooth is the only solution I can foresee. Both my laptop and my phone support A2DP but for some reason the service (from the phone) does not show up while I add a new connection and the phone does not let me initiate a connection with any other profile except FTP (although it detects other services in the service list like A2DP and works perfectly fine with other profiles like DUN, HID, OPP, SSP if the connection is started through the computer). I am currently using the latest version of the Toshiba stack, I have tried using WIDCOMM but it refuses to install drivers for both the internal Bluetooth (which is a Broadcom device) and the USB Bluetooth that I use on my desktop. The standard Microsoft stack (generic driver) does install but it does not work with both of my devices as they do not detect any Bluetooth devices when scanning. With BlueSoleil (the default stack that came with the USB Bluetooth) I could set my device as "headset" instead of only "laptop/desktop", and this allowed both my phones to detect my laptop as a device they can use as a headset, but the problem with this stack was that only the older phone could actually connect to my laptop and that the internal Bluetooth could not be used. Basically, I want to set the device type as a "headset" for my phone using the Toshiba stack like I did with BlueSoleil. Is there any way this could be done? Thanks. Image: Device type selection http://i.stack.imgur.com/drjC6.jpg

    Read the article

  • FTP on Linux "Failed to retrieve directory listing" not firewall issue

    - by Jaka Prasnikar
    I've got a VPS in Germany running Debian x64, and I have a very strange issue. I have the ISPConfig control panel installed using proftpd, and I can not connect to FTP by any means. A few hours ago I had DirectAdmin installed on CentOS on the same VPS, with the same issue. Simply put, when I connect to the FTP server I get this:

      Status:   Resolving address of web02.defikon.com
      Status:   Connecting to 130.255.190.71:21...
      Status:   Connection established, waiting for welcome message...
      Response: 220---------- Welcome to Pure-FTPd [privsep] [TLS] ----------
      Response: 220-You are user number 1 of 50 allowed.
      Response: 220-Local time is now 12:15. Server port: 21.
      Response: 220-This is a private system - No anonymous login
      Response: 220-IPv6 connections are also welcome on this server.
      Response: 220 You will be disconnected after 15 minutes of inactivity.
      Command:  USER default1
      Response: 331 User default1 OK. Password required
      Command:  PASS ******
      Response: 230-User default1 has group access to: client0 sshusers
      Response: 230 OK. Current restricted directory is /
      Command:  OPTS UTF8 ON
      Response: 200 OK, UTF-8 enabled
      Status:   Connected
      Status:   Retrieving directory listing...
      Command:  PWD
      Response: 257 "/" is your current location
      Command:  TYPE I
      Response: 200 TYPE is now 8-bit binary
      Command:  PASV
      Error:    Connection timed out
      Error:    Failed to retrieve directory listing

    I even tried telnet localhost 21 and the same happens: once I issue the command "LIST" I get a timeout. I've tried everything and I can't get this to work =( Please help! P.S.: iptables is turned off.
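
    The transcript stalls right at PASV (and LIST over telnet), which is the classic signature of the passive-mode data ports being unreachable even though port 21 works, so a firewall or NAT outside the VPS may still be in play even with local iptables off. A hedged sketch for Debian's Pure-FTPd; the port range is illustrative:

      echo "40000 40100" > /etc/pure-ftpd/conf/PassivePortRange
      # if PASV replies must advertise a specific public address, also:
      # echo "130.255.190.71" > /etc/pure-ftpd/conf/ForcePassiveIP
      /etc/init.d/pure-ftpd-mysql restart     # ISPConfig installs the -mysql variant
      # and make sure 40000-40100/tcp is allowed by any firewall in front of the VPS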

    Read the article

  • syslog-ng and nginx logs to mysql

    - by Katafalkas
    So a couple of days ago I asked how to log php and nginx logs to a centralized MySQL database, and m0ntassar gave a perfect answer :) cheers!

    The problem I am facing now is that I can not seem to get it working.

    syslog-ng version:

      # syslog-ng --version
      syslog-ng 3.2.5

    This is my nginx log format:

      log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    syslog-ng source:

      source nginx {
          file("/var/log/nginx/tg-test-3.access.log" follow_freq(1) flags(no-parse));
      };

    syslog-ng destination:

      destination d_sql {
          sql(type(mysql)
              host("127.0.0.1") username("syslog") password("superpasswd")
              database("syslog") table("nginx")
              columns("remote_addr", "remote_user", "time_local", "request", "status", "body_bytes_sent", "http_referer", "http_user_agent", "http_x_forwarded_for")
              values("$REMOTE_ADDR", "$REMOTE_USER", "$TIME_LOCAL", "$REQUEST", "$STATUS", "$BODY_BYTES_SENT", "$HTTP_REFERER", "$HTTP_USER_AGENT", "$HTTP_X_FORWARDED_FOR"));
      };

    MySQL table for testing purposes:

      CREATE TABLE `nginx` (
        `remote_addr` varchar(100) DEFAULT NULL,
        `remote_user` varchar(100) DEFAULT NULL,
        `time` varchar(100) DEFAULT NULL,
        `request` varchar(100) DEFAULT NULL,
        `status` varchar(100) DEFAULT NULL,
        `body_bytes_sent` varchar(100) DEFAULT NULL,
        `http_referer` varchar(100) DEFAULT NULL,
        `http_user_agent` varchar(100) DEFAULT NULL,
        `http_x_forwarded_for` varchar(100) DEFAULT NULL,
        `time_local` text,
        `datetime` text,
        `host` text,
        `program` text,
        `pid` text,
        `message` text
      ) ENGINE=InnoDB DEFAULT CHARSET=latin1

    Now the first thing that goes wrong is when I restart syslog-ng:

      # /etc/init.d/syslog-ng restart
      Stopping syslog-ng: [ OK ]
      Starting syslog-ng: WARNING: You are using the default values for columns(), indexes() or values(), please specify these explicitly as the default will be dropped in the future; [ OK ]

    I have tried creating a file destination and it all works fine, and I have also tried replacing my destination with:

      destination d_sql {
          sql(type(mysql)
              host("127.0.0.1") username("syslog") password("kosmodromas")
              database("syslog") table("nginx")
              columns("datetime", "host", "program", "pid", "message")
              values("$R_DATE", "$HOST", "$PROGRAM", "$PID", "$MSGONLY")
              indexes("datetime", "host", "program", "pid", "message"));
      };

    which did work and was writing stuff to MySQL. The problem is that I want to write stuff in exactly the format that the nginx log format produces. I assume that I am missing something really simple, or that I need to do some parsing between source and destination. Any help will be much appreciated :)
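
    With flags(no-parse) the whole nginx line arrives in the $MESSAGE macro, so macros like $REMOTE_ADDR simply never exist, which is why the first destination stores nothing useful. One hedged way to get per-field values (a sketch only; the column names are invented and the placement assumes everything lives in syslog-ng.conf) is to split the line with a csv-parser and reference the resulting named columns in values():

      cat >> /etc/syslog-ng/syslog-ng.conf <<'EOF'
      parser p_nginx {
          csv-parser(columns("NGINX.REMOTE_ADDR", "NGINX.DASH", "NGINX.REMOTE_USER",
                             "NGINX.TIME_LOCAL", "NGINX.REQUEST", "NGINX.STATUS",
                             "NGINX.BODY_BYTES_SENT", "NGINX.HTTP_REFERER",
                             "NGINX.HTTP_USER_AGENT", "NGINX.HTTP_X_FORWARDED_FOR")
                     delimiters(" ")
                     quote-pairs('""[]'));
      };
      log {
          source(nginx);
          parser(p_nginx);
          destination(d_sql);   # change values() there to "${NGINX.REMOTE_ADDR}" etc.
      };
      EOF
      syslog-ng -s && /etc/init.d/syslog-ng restart   # -s only checks the syntax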

    Read the article

  • Please, help writing a MIB

    - by facha
    I have a problem with an snmpwalk query returning SNMP variables in a non-uniform way:

      .1.3.6.1.2.1.10.127.1.3.3.1.2.215 -> Hex-STRING: 24 37 4C 0C 65 0E
      .1.3.6.1.2.1.10.127.1.3.3.1.2.216 -> Hex-STRING: 24 37 4C 0B A2 DA
      .1.3.6.1.2.1.10.127.1.3.3.1.2.217 -> STRING: "$7L f:"
      .1.3.6.1.2.1.10.127.1.3.3.1.2.218 -> STRING: "$7L k2"

    As you can see, some variables are of a STRING type, others are Hex-STRING. So, I'm trying to write a simple MIB to force them all to come out as Hex-STRING. This is where I've gotten so far:

      TEST-MIB DEFINITIONS ::= BEGIN

      PhysAddress ::= TEXTUAL-CONVENTION
          DISPLAY-HINT "1x:"
          STATUS current
          SYNTAX OCTET STRING

      test OBJECT-TYPE
          SYNTAX PhysAddresss
          MAX-ACCESS read-only
          STATUS current
          ::= { 1 3 6 1 2 1 10 127 1 3 3 1 2 }

      END

    However, snmpwalk doesn't seem to notice my textual convention (even though the "test" variable is being recognized). I still get a mixture of STRINGs and Hex-STRINGs. Could anybody point out where my mistake is?

      snmpwalk -v2c -cpublic 192.168.1.2 TEST-MIB::test
      ...
      TEST-MIB::test.216 = Hex-STRING: 24 37 4C 0B A2 DA
      TEST-MIB::test.217 = STRING: "$7L f:"

    Read the article

  • compose-key mappings differ between gtk and qt apps

    - by intuited
    I'm noticing that there is an inconsistency in the output of one of the compose-key combos. When I type ( [Compose] . . ) under Chrome, gedit, gnome-terminal, or roxterm I get the character '˙'. This is a small raised dot:

      $ echo -n '˙' | xxd
      0000000: cb99  ..

    When I type the same combo under konsole, yakuake, or kate, I get the character '…'. This is an ellipsis:

      $ echo -n '…' | xxd
      0000000: e280 a6  ...

    This is not a font issue: if I copy-paste the characters from an app using one toolkit to an app using the other, its appearance is maintained. I use a few other combos pretty regularly and they seem to work consistently across toolkits. I think this is a pretty recent phenomenon. I upgraded from Ubuntu 8.10 to 9.10 fairly recently so this might be related. I'm not sure if this will reoccur if I restart X, and I'd rather not find out. Can someone explain how this is possible, and what I can do to resolve it? I'd like to have the ellipsis appear in all apps when that combo is entered.
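
    Each toolkit ships its own compose table, which is why the same sequence can produce U+02D9 (dot above) in GTK apps and U+2026 (ellipsis) in Qt apps. A hedged workaround (not an explanation of why the tables diverged) is to pin the sequence yourself in a user-level ~/.XCompose; older GTK versions only honour it through the XIM input module, so that part is an assumption about your setup:

      cat > ~/.XCompose <<'EOF'
      include "%L"
      <Multi_key> <period> <period> : "…"   U2026
      EOF
      echo 'export GTK_IM_MODULE=xim' >> ~/.profile
      # log out and back in (or restart X) for both changes to take effect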

    Read the article

  • 554 - Sending MTA’s poor reputation

    - by Phil Wilks
    I am running an email server on 77.245.64.44 and have recently started to have problems with remote delivery of emails sent using this server. Only about 5% of recipients are rejecting the emails, but they all share the following common message...

      Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation.

    As far as I can tell my server is not on any blacklists, and it is set up correctly (the reverse DNS checks out and so on). I'm not even sure what the "Sending MTA" is, but I assume it's my server. If anyone could shed any light on this I'd really appreciate it! Here's the full bounce message...

      Could not deliver message to the following recipient(s):
      Failed Recipient: [email protected]
      Reason: Remote host said: 554 Your access to this mail system has been rejected due to the sending MTA's poor reputation. If you believe that this failure is in error, please contact the intended recipient via alternate means.
      -- The header and top 20 lines of the message follows --
      Received: from 79-79-156-160.dynamic.dsl.as9105.com [79.79.156.160] by mail.fruityemail.com with SMTP; Thu, 3 Sep 2009 18:15:44 +0100
      From: "Phil Wilks"
      To:
      Subject: Test
      Date: Thu, 3 Sep 2009 18:16:10 +0100
      Organization: Fruity Solutions
      Message-ID:
      MIME-Version: 1.0
      Content-Type: multipart/alternative; boundary="----=_NextPart_000_01C2_01CA2CC2.9D9585A0"
      X-Mailer: Microsoft Office Outlook 12.0
      Thread-Index: Acosujo9LId787jBSpS3xifcdmCF5Q==
      Content-Language: en-gb
      x-cr-hashedpuzzle: ADYN AzTI BO8c BsNW Cqg/ D10y E0H4 GYjP HZkV Hc9t ICru JPj7 Jd7O Jo7Q JtF2 KVjt;1;YwBoAGEAcgBsAG8AdAB0AGUALgBoAHUAbgB0AC0AZwByAHUAYgBiAGUAQABzAHUAbgBkAGEAeQAtAHQAaQBtAGUAcwAuAGMAbwAuAHUAawA=;Sosha1_v1;7;{F78BB28B-407A-4F86-A12E-7858EB212295};cABoAGkAbABAAGYAcgB1AGkAdAB5AHMAbwBsAHUAdABpAG8AbgBzAC4AYwBvAG0A;Thu, 03 Sep 2009 17:16:08 GMT;VABlAHMAdAA=
      x-cr-puzzleid: {F78BB28B-407A-4F86-A12E-7858EB212295}

      This is a multipart message in MIME format.

      ------=_NextPart_000_01C2_01CA2CC2.9D9585A0
      Content-Type: text/plain; charset="us-ascii"
      Content-Transfer-Encoding: 7bit
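
    The "sending MTA" is whatever machine handed the message to the recipient's server. Note that the Received header above shows the message entering the relay from a dynamic DSL address (79-79-156-160.dynamic.dsl.as9105.com), and many reputation systems score dynamic ranges harshly, so it is worth confirming which IP the outside world actually sees. A hedged diagnostic sketch; the DNSBLs queried are just examples:

      IP=77.245.64.44
      dig +short -x $IP                      # reverse DNS should match the server's HELO name
      REV=$(echo $IP | awk -F. '{print $4"."$3"."$2"."$1}')
      for bl in zen.spamhaus.org bl.spamcop.net b.barracudacentral.org; do
          printf '%-25s %s\n' "$bl" "$(dig +short $REV.$bl)"
      done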

    Read the article

  • Postgresql base backup script

    - by Terry Lorber
    I'm using the following script to do a file-level backup of PostgreSQL. I sometimes see that the last part, the cleanup done after "pg_stop_backup" is called, hangs while it waits for the last WAL to be created. The REF_FILE to search for is sometimes wrong. I'm also shipping these files to a different machine every 5 minutes via rsync. What do other people do to safely remove old WAL files?

      #!/bin/bash
      PGDATA=/usr/local/pgsql/data
      WAL_ARCHIVE=/usr/local/pgsql/archives
      PGBACKUP=/usr/local/pgsqlbackup
      PSQL=/usr/local/pgsql/bin/psql
      today=`date +%Y%m%d-%H%M%S`
      label=base_backup_${today}

      echo "Executing pg_start_backup with label $label in server ... "
      CP=`$PSQL -q -Upostgres -d template1 -c "SELECT pg_start_backup('$label');" -P tuples_only -P format=unaligned`
      RVAL=$?
      echo "Begin CheckPoint is $CP"
      if [ ${RVAL} -ne 0 ]
      then
          echo "PSQL pg_start_backup failed"
          exit 1;
      fi
      echo "pg_start_backup executed successfully"

      echo "TAR begins ... "
      pushd $PGBACKUP
      tar -cjf pgdata-$today.tar.bz2 --exclude='pg_xlog' $PGDATA/*
      popd
      echo "TAR completed"

      echo "Executing pg_stop_backup in server ... "
      $PSQL -Upostgres template1 -c "SELECT pg_stop_backup();"
      if [ $? -ne 0 ]
      then
          echo "PSQL pg_stop_backup failed"
          exit 1;
      fi
      echo "pg_stop_backup done successfully"

      TO_SEARCH="*${CP:0:2}000000${CP:3:2}.00${CP:5}"
      echo "Check for ${WAL_ARCHIVE}/${TO_SEARCH}.backup"
      while [ ! -e ${WAL_ARCHIVE}/${TO_SEARCH}.backup ]; do
          echo "Waiting for ${WAL_ARCHIVE}/${TO_SEARCH}.backup"
          sleep 1
      done

      REF_FILE="`echo ${WAL_ARCHIVE}/*${CP:0:2}000000${CP:3:2}`"
      echo "Reference file ${REF_FILE}"
      # "-not -newer" or "\! -newer" will also return REF_FILE
      # so you have to grep it out and use xargs; otherwise you
      # could also use the -delete action
      find ${WAL_ARCHIVE} -not -newer ${REF_FILE} -type f | grep -v "^${REF_FILE}$" | xargs rm -f

      REF_FILE="`echo ${PGBACKUP}/pgdata-$today.tar.bz2`"
      echo "Reference file ${REF_FILE}"
      find $PGBACKUP -not -newer ${REF_FILE} -type f -name pgdata* | grep -v "^${REF_FILE}$" | xargs rm -f
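
    For the cleanup question specifically, one commonly used alternative to the find-based approach is pg_archivecleanup, pointed at the .backup label file that pg_stop_backup leaves in the archive; it removes exactly the WAL segments older than that base backup. This is a hedged sketch only, and it assumes PostgreSQL 9.0 or later, where the contrib tool is available:

      # keep the newest .backup label and let pg_archivecleanup prune everything older
      WAL_ARCHIVE=/usr/local/pgsql/archives
      LATEST_LABEL=$(ls -t ${WAL_ARCHIVE}/*.backup | head -n 1)
      pg_archivecleanup "${WAL_ARCHIVE}" "$(basename "${LATEST_LABEL}")"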

    Read the article

  • With a username passed to a script, find the user's home directory

    - by Clinton Blackmore
    I am writing a script that gets called when a user logs in and checks whether a certain folder exists or is a broken symlink. (This is on a Mac OS X system, but the question is purely bash.) It is not elegant, and it is not working, but right now it looks like this:

      #!/bin/bash

      # Often users have a messed up cache folder -- one that was redirected
      # but now is just a broken symlink. This script checks to see if
      # the cache folder is all right, and if not, deletes it
      # so that the system can recreate it.

      USERNAME=$3

      if [ "$USERNAME" == "" ] ; then
          echo "This script must be run at login!" >&2
          exit 1
      fi

      DIR="~$USERNAME/Library/Caches"
      cd $DIR || rm $DIR && echo "Removed misdirected Cache folder" && exit 0
      echo "Cache folder was fine."

    The crux of the problem is that the tilde expansion is not working as I'd like. Let us say that I have a user named george, and that his home folder is /a/path/to/georges_home. If, at a shell, I type:

      cd ~george

    it takes me to the appropriate directory. If I type:

      HOME_DIR=~george
      echo $HOME_DIR

    it gives me:

      /a/path/to/georges_home

    However, if I try to use a variable, it does not work:

      USERNAME="george"
      cd ~$USERNAME
      -bash: cd: ~george: No such file or directory

    I've tried using quotes and backticks, but can't figure out how to make it expand properly. How do I make this work?
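
    Tilde expansion happens before variable expansion, so the shell never sees a literal ~george in ~$USERNAME. Two hedged ways around it are sketched below: asking Directory Services for the home directory (Mac OS X specific), or forcing a second round of evaluation with eval; both assume $USERNAME is a plain, trusted login name.

      # 1. read the home directory from Directory Services (breaks if the path contains spaces)
      HOMEDIR=$(dscl . -read "/Users/$USERNAME" NFSHomeDirectory | awk '{print $2}')

      # 2. or let the shell re-evaluate the line so tilde expansion sees a literal name
      HOMEDIR=$(eval echo "~$USERNAME")

      DIR="$HOMEDIR/Library/Caches"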

    Read the article

  • Virtual bridged networking with VLAN, could not ping

    - by v.yegy
    I need a virtual network with VLANs built between two virtual hosts, which can be LXC or VirtualBox guests (Ubuntu or Windows XP). I tried LXC and VirtualBox with Ubuntu and found it difficult to make it work even without VLANs, but was successful with VirtualBox and XP.

      vbox-xp1 --- br1 ---------------- br2 ---- vbox-xp2

    The config is:

      brctl addbr br1; brctl addbr br2
      ifconfig br1 up; ifconfig br2 up
      stp br1 off; stp br2 off
      ip link add name br1-br2-l0 type veth peer name br1-br2-l1
      sudo brctl addif br1 br1-br2-l0
      sudo brctl addif br2 br1-br2-l1

    vbox-xp1 and vbox-xp2 use bridged networking on br1 and br2 respectively. The adapter is an Intel PRO/1000 MT Server and the driver is installed in the guests. I configured IPs and the two hosts pinged!

    VLAN config:

      ip link add link br1 name br1-2.5 type vlan id 5
      brctl addif br2 br1-2.5

    I then create VLAN 5 in XP 1 and 2 and assign IP addresses. Ping with this config does not work. A Wireshark trace on interface br1-br2-l1 / br1-2.5 shows that one ping results in ~240 ping packets, each growing by 4 bytes (the first one being correct at 60 bytes), and the ping does not reach the other host, as I can see the MAC is not learnt [arp -a].

    If br1-2.5 is not configured, I see untagged packets on br1-br2-l1/0, but they still do not reach the other host, as the MAC is not learnt. If br1-br2-l0/1 is brought down, even if br1-2.5 is up, I could not see any packets. I tried with ebtables, but still could not get a correct config to work.

    If anyone here is aware of a working configuration, please let me know. I need to make a network of switches. Seems I have a very long way to go. Sorry for a very long question. Thanks and regards, vy

    Read the article

  • concurrency::accelerator_view

    - by Daniel Moth
    Overview

    We saw previously that accelerator represents a target for our C++ AMP computation or memory allocation and that there is a notion of a default accelerator. We ended that post by introducing how one can obtain accelerator_view objects from an accelerator object through the accelerator class's default_view property and the create_view method.

    The accelerator_view objects can be thought of as handles to an accelerator. You can also construct an accelerator_view given another accelerator_view (through the copy constructor or the assignment operator overload). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator_view objects between them to determine if they refer to the same underlying accelerator. We'll see later that when we use concurrency::array objects, the allocation of data takes place on an accelerator at array construction time, so there is a constructor overload that accepts an accelerator_view object. We'll also see later that a new concurrency::parallel_for_each function overload can take an accelerator_view object, so it knows on what target to execute the computation (represented by a lambda that the parallel_for_each also accepts). Beyond normal usage, accelerator_view is a quality of service concept that offers isolation to multiple "consumers" of an accelerator. If in your code you are accessing the accelerator from multiple threads (or, in general, from different parts of your app), then you'll want to create separate accelerator_view objects for each thread.

    flush, wait, and queuing_mode

    When you create an accelerator_view via the create_view method of the accelerator, you pass in an option of immediate or deferred, which are the two members of the queuing_mode enum. At any point you can access this value from the queuing_mode property of the accelerator_view. When the queuing_mode value is immediate (which is the default), any commands sent to the device such as kernel invocations and data transfers (e.g. parallel_for_each and copy, as we'll see in future posts) will get submitted as soon as the runtime sees fit (that is the definition of immediate). When the value of queuing_mode is deferred, the commands will be batched up. To send all buffered commands to the device for execution, there is a non-blocking flush method that you can call. If you wish to block until all the commands have been sent, there is a wait method you can call. Deferring is a more advanced scenario aimed at performance gains when you are submitting many device commands and you want to avoid the tiny overhead of flushing/submitting each command separately.

    Querying information

    Just like accelerator, accelerator_view exposes the is_debug and version properties. In fact, you can always access the accelerator object from the accelerator property on the accelerator_view class to access the accelerator interface we looked at previously.

    Interop with D3D (aka DX)

    In a later post I'll show an example of an app that uses C++ AMP to compute data that is used in pixel shaders. In those scenarios, you can benefit by integrating C++ AMP into your graphics pipeline, and one of the building blocks for that is being able to use the same device context from both the compute kernel and the other shaders. You can do that by going from accelerator_view to device context (and vice versa), through part of our interop API in amp.h: get_device, create_accelerator_view. More on those in a later post.
Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • How to mount vfat drive on Linux with ownership other than root?

    - by Norman Ramsey
    I'm running into trouble mounting an iPod on a newly upgraded Debian Squeeze. I suspect either a protocol has changed or I've tickled a bug, which I don't know where to report. I'm trying to mount the iPod so that I have permission to read and write it. But my efforts come to nothing:

      $ sudo mount -v -t vfat -o uid=32074,gid=6202 /dev/sde2 /mnt
      /dev/sde2 on /mnt type vfat (rw,uid=32074,gid=6202)
      $ ls -l /mnt
      total 80
      drwxr-xr-x 2 root root 16384 Jan 1 2000 Calendars
      drwxr-xr-x 2 root root 16384 Jan 1 2000 Contacts
      drwxr-xr-x 2 root root 16384 Jan 1 2000 Notes
      drwxr-xr-x 3 root root 16384 Jun 23 2007 Photos
      drwxr-xr-x 6 root root 16384 Jun 19 2007 iPod_Control
      $ sudo umount /mnt
      $ sudo mount -v -t vfat -o uid=nr,gid=nr /dev/sde2 /mnt
      /dev/sde2 on /mnt type vfat (rw,uid=32074,gid=6202)
      $ ls -l /mnt
      total 80
      drwxr-xr-x 2 root root 16384 Jan 1 2000 Calendars
      drwxr-xr-x 2 root root 16384 Jan 1 2000 Contacts
      drwxr-xr-x 2 root root 16384 Jan 1 2000 Notes
      drwxr-xr-x 3 root root 16384 Jun 23 2007 Photos
      drwxr-xr-x 6 root root 16384 Jun 19 2007 iPod_Control

    As you see, I've tried both symbolic and numeric IDs, but the files persist in being owned by root (and only writable by root). The IDs are really mine; I've had the UID since 1993.

      $ id
      uid=32074(nr) gid=6202(nr) groups=6202(nr),0(root),2(bin),4(adm),...

    I've put an strace at http://pastebin.com/Xue2u9FZ, and the mount(2) call looks good:

      mount("/dev/sde2", "/mnt", "vfat", MS_MGC_VAL, "uid=32074,gid=6202") = 0

    Finally, here's my kernel version from uname -a:

      Linux homedog 2.6.32-5-686 #1 SMP Mon Jun 13 04:13:06 UTC 2011 i686 GNU/Linux

    Does anyone know if I should be doing something different, or if there is a workaround, or, if this is a bug, where it should be reported?

    Read the article

  • Windows 7 PC cannot see some LAN PCs, but can access them via path

    - by zoot
    In an office LAN, with Windows 7 Professional workstations and a FreeNAS Samba server, 2 workstations have intermittent problems in browsing for the other workstations, as well as the FreeNAS server. However, so far, it appears that typing in the path to any of the workstations which aren't visible via the "browse" function, works. ie. the machine Workstation7 is not visible while browsing via Windows Explorer, but is accessible if I type \\Workstation7 in the path field. Occasionally the workstations exhibiting these symptoms show errors that their connection to the FreeNAS server has failed and only rebooting resolves the issue. All other workstations on the network use identical Windows 7 Professional installations and never have these problems. I've checked all machines and they're not using Home Groups. All are setup on the same WorkGroup as the FreeNAS server and the network type is set to Work Network. Temporarily disabling the firewall on the workstations with the issue made no difference, so I know this has nothing to do with the firewall settings. Any pointers would be appreciated. Thanks.

    Read the article

  • October 2012 Critical Patch Update and Critical Patch Update for Java SE Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice. Oracle has just released the October 2012 Critical Patch Update and the October 2012 Critical Patch Update for Java SE.  As a reminder, the release of security patches for Java SE continues to be on a different schedule than for other Oracle products due to commitments made to customers prior to the Oracle acquisition of Sun Microsystems.  We do however expect to ultimately bring Java SE in line with the regular Critical Patch Update schedule, thus increasing the frequency of scheduled security releases for Java SE to 4 times a year (as opposed to the current 3 yearly releases).  The schedules for the “normal” Critical Patch Update and the Critical Patch Update for Java SE are posted online on the Critical Patch Updates and Security Alerts page. The October 2012 Critical Patch Update provides a total of 109 new security fixes across a number of product families including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Customer Relationship Management (CRM), Oracle Industry Applications, Oracle FLEXCUBE, Oracle Sun products suite, Oracle Linux and Virtualization, and Oracle MySQL. Out of these 109 new vulnerabilities, 5 affect Oracle Database Server.  The most severe of these Database vulnerabilities has received a CVSS Base Score of 10.0 on Windows platforms and 7.5 on Linux and Unix platforms.  This vulnerability (CVE-2012-3137) is related to the “Cryptographic flaws in Oracle Database authentication protocol” disclosed at the Ekoparty Conference.  Because of timing considerations (proximity to the release date of the October 2012 Critical Patch Update) and the need to extensively test the fixes for this vulnerability to ensure compatibility across the products stack, the fixes for this vulnerability were not released through a Security Alert, but instead mitigation instructions were provided prior to the release of the fixes in this Critical Patch Update in My Oracle Support Note 1492721.1.  Because of the severity of these vulnerabilities, Oracle recommends that this Critical Patch Update be installed as soon as possible. Another 26 vulnerabilities fixed in this Critical Patch Update affect Oracle Fusion Middleware.  The most severe of these Fusion Middleware vulnerabilities has received a CVSS Base Score of 10.0; it affects Oracle JRockit and is related to Java vulnerabilities fixed in the Critical Patch Update for Java SE.  The Oracle Sun products suite gets 18 new security fixes with this Critical Patch Update.  Note also that Oracle MySQL has received 14 new security fixes; the most severe of these MySQL vulnerabilities has received a CVSS Base Score of 9.0. Today’s Critical Patch Update for Java SE provides 30 new security fixes.  The most severe CVSS Base Score for these Java SE vulnerabilities is 10.0 and this score affects 10 vulnerabilities.  As usual, Oracle reports the most severe CVSS Base Score, and these CVSS 10.0s assume that the user running a Java Applet or Java Web Start application has administrator privileges (as is typical on Windows XP). However, when the user does not run with administrator privileges (as is typical on Solaris and Linux), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability are "Partial" instead of "Complete", typically lowering the CVSS Base Score to 7.5 denoting that the compromise does not extend to the underlying Operating System.  
Also, as is typical in the Critical Patch Update for Java SE, most of the vulnerabilities affect Java and Java FX client deployments only.  Only 2 of the Java SE vulnerabilities fixed in this Critical Patch Update affect client and server deployments of Java SE, and only one affects server deployments of JSSE.  This reflects the fact that Java running on servers operate in a more secure and controlled environment.  As discussed during a number of sessions at JavaOne, Oracle is considering security enhancements for Java in desktop and browser environments.  Finally, note that the Critical Patch Update for Java SE is cumulative, in other words it includes all previously released security fixes, including the fix provided through Security Alert CVE-2012-4681, which was released on August 30, 2012. For More Information: The October 2012 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuoct2012-1515893.html The October 2012 Critical Patch Update for Java SE advisory is located at http://www.oracle.com/technetwork/topics/security/javacpuoct2012-1515924.html.  An online video about the importance of keeping up with Java releases and the use of the Java auto update is located at http://medianetwork.oracle.com/video/player/1218969104001 More information about Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html  

    Read the article

  • DNS Server on Fedora 11

    - by Funky Si
    I recently upgraded my Fedora 10 server to Fedora 11 and am now getting the following error from my DNS/named service:

      named[27685]: not insecure resolving 'fedoraproject.org/A/IN: 212.104.130.65#53

    This only shows for certain addresses; some are resolved fine and I can ping and browse to them, while others produce the error above. This is my named.conf file:

      acl trusted-servers {
          192.168.1.10;
      };

      options {
          directory "/var/named";
          forwarders { 212.104.130.9; 212.104.130.65; };
          forward only;
          allow-transfer { 127.0.0.1; };
      #    dnssec-enable yes;
      #    dnssec-validation yes;
      #    dnssec-lookaside . trust-anchor dlv.isc.org.;
      };

      # Forward Zone for hughes.lan domain
      zone "funkygoth" IN {
          type master;
          file "funkygoth.zone";
          allow-transfer { trusted-servers; };
      };

      # Reverse Zone for hughes.lan domain
      zone "1.168.192.in-addr.arpa" IN {
          type master;
          file "1.168.192.zone";
      };

      include "/etc/named.dnssec.keys";
      include "/etc/pki/dnssec-keys/dlv/dlv.isc.org.conf";
      include "/etc/pki/dnssec-keys//named.dnssec.keys";
      include "/etc/pki/dnssec-keys//dlv/dlv.isc.org.conf";

    Anyone know what I have set wrong here?
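
    The "not insecure resolving" wording is a DNSSEC validation message, and the dnssec-keys/dlv includes at the bottom can switch lookaside validation on even though the options lines are commented out. A hedged check (it assumes the failures really are validation-related rather than a forwarder problem):

      dig +cd fedoraproject.org A @127.0.0.1    # +cd disables checking; if this works, validation is the culprit
      # if so, either keep validation and fix the dlv key includes, or state it
      # explicitly in the options block of named.conf (assumption: validation is not required here):
      #     dnssec-enable yes;
      #     dnssec-validation no;
      # then: service named restart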

    Read the article

  • Mac Port error installing gsoap

    - by Kevin
    Hi All, I have installed MacPorts V1.8.1 with no worries. I ran

      sudo port -v selfupdate

    with no worries. I then ran

      sudo port install gsoap

    and get the following error message:

      --->  Computing dependencies for gsoap
      --->  Fetching gsoap
      --->  Attempting to fetch gsoap_2.7.13.tar.gz from http://optusnet.dl.sourceforge.net/gsoap2
      --->  Verifying checksum(s) for gsoap
      --->  Extracting gsoap
      --->  Applying patches to gsoap
      --->  Configuring gsoap
      Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_gsoap/work/gsoap-2.7" && ./configure --prefix=/opt/local --enable-samples " returned error 77
      Command output: checking for a BSD-compatible install... /usr/bin/install -c
      checking whether build environment is sane... yes
      checking for gawk... no
      checking for mawk... no
      checking for nawk... no
      checking for awk... awk
      checking whether make sets $(MAKE)... no
      checking build system type... i386-apple-darwin10.2.0
      checking host system type... i386-apple-darwin10.2.0
      checking whether make sets $(MAKE)... (cached) no
      checking for C++ compiler default output file name...
      configure: error: C++ compiler cannot create executables
      See `config.log' for more details.
      Error: Status 1 encountered during processing.

    Any ideas as to why it is failing? Regards, Kevin
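
    "C++ compiler cannot create executables" at the configure stage usually means the build found no working compiler at all rather than anything gsoap-specific. A few hedged checks (they assume MacPorts on Snow Leopard with Xcode expected in its standard location):

      g++ --version        # is there a C++ compiler on the PATH at all?
      xcodebuild -version  # is Xcode (which provides the compilers MacPorts uses) installed?
      # the exact failing test is recorded in the work directory's config.log:
      less /opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_gsoap/work/gsoap-2.7/config.log

    If Xcode is missing, or was not reinstalled after an OS upgrade, reinstalling it (including the UNIX/command-line development components) and re-running the port command is the usual remedy; treat that as an assumption to verify against config.log rather than a certain diagnosis.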

    Read the article

  • Event Log: atapi - the device did not respond within the timeout period - Freeze

    - by rjlopes
    Hi, I have a Windows Server 2003 that stops working randomly (it displays an image on the monitor but is completely frozen). All I could find in the event log as possible causes were an error from atapi and a warning from msas2k3. The event log entries are:

      Event Type: Error
      Event Source: atapi
      Event Category: None
      Event ID: 9
      Date: 22-07-2009
      Time: 16:13:33
      User: N/A
      Computer: SERVER
      Description: The device, \Device\Ide\IdePort0, did not respond within the timeout period.
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
      Data:
      0000: 0f 00 10 00 01 00 64 00   ......d.
      0008: 00 00 00 00 09 00 04 c0   .......À
      0010: 01 01 00 50 00 00 00 00   ...P....
      0018: f8 06 20 00 00 00 00 00   ø. .....
      0020: 00 00 00 00 00 00 00 00   ........
      0028: 00 00 00 00 01 00 00 00   ........
      0030: 00 00 00 00 07 00 00 00   ........

      Event Type: Warning
      Event Source: msas2k3
      Event Category: None
      Event ID: 129
      Date: 22-07-2009
      Time: 16:14:23
      User: N/A
      Computer: SERVER
      Description: Reset to device, \Device\RaidPort0, was issued.
      For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
      Data:
      0000: 0f 00 10 00 01 00 68 00   ......h.
      0008: 00 00 00 00 81 00 04 80   ......?
      0010: 04 00 00 00 00 00 00 00   ........
      0018: 00 00 00 00 00 00 00 00   ........
      0020: 00 00 00 00 00 00 00 00   ........
      0028: 00 00 00 00 00 00 00 00   ........
      0030: 01 00 00 00 81 00 04 80   ......?

    Any hints?

    Read the article

  • How to balance the root domain using NS records?

    - by Patrick McCurley
    I have two load balancers that balance incoming traffic across multiple data centers. These work fine. I can test them out by doing an 'nslookup mydomain.com xIP'.

    I have now taken out DNS services with DYN.com to allow me to manage the DNS zone file, so that typing mydomain.com will ask my load balancers what IP address to resolve to.

    Step 1: the NS record for www. I set up A records (glue) for ns1 & ns2, then the corresponding NS records to delegate the DNS lookup to the balancers instead of DYN.com's nameservers.

      ns1.mydomain.com    A   [ip address of load balancer 1]
      ns2.mydomain.com    A   [ip address of load balancer 1]
      www.mydomain.com    NS  ns1.mydomain.com
      www.mydomain.com    NS  ns2.mydomain.com

    All is well - when I type www.mydomain.com, the requests get delegated to my load balancers, which provide the IP address of the endpoint, and the connection is made successfully.

    Step 2: the NS record for the root. This is where I run into problems. I need customers to be able to type 'mydomain.com' (without the www) and ALSO get delegated to the load balancers for the IP address. However, from the research I have done, and through the DYN control panel, it seems that providing an NS record for the root is not allowed, as this overrides the default nameservers. How can I delegate both the root and the www to my load balancers?

    Read the article

  • Time not propagating to machines on Windows domain

    - by rbeier
    We have a two-domain Active Directory forest: ourcompany.com at the root, and prod.ourcompany.com for production servers. Time is propagating properly through the root domain, but servers in the child domain are unable to sync via NTP. So the time on these servers is starting to drift, since they're relying only on the hardware clock. When I type "net time" on one of the production servers, I get the following error:

      Could not locate a time-server.
      More help is available by typing NET HELPMSG 3912.

    When I type "w32tm /resync", I get the following:

      Sending resync command to local computer
      The computer did not resync because no time data was available.

    "w32tm /query /source" shows the following:

      Free-running System Clock

    We have three domain controllers in the prod.ourcompany.com subdomain (overkill, but the result of a migration - we haven't gotten rid of one of the old ones yet). To complicate matters, the domain controllers are all virtualized, running on two different physical hosts. But the time on the domain controllers themselves is accurate - the servers that aren't DCs are the ones having problems. Two of the DCs are running Server 2003, including the PDC emulator. The third DC is running Server 2008. (I could move the PDC emulator role to the 2008 machine if that would help.) The non-DC servers are all running Server 2008. All other Active Directory functionality works fine in the production domain - we're only seeing problems with NTP. I can manually sync each machine to the time source (the PDC emulator) by doing the following:

      net time \\dc1.prod.ourcompany.com /set /y

    But this is just a one-off, and it doesn't cause automated time syncing to start working. I guess I could create a scheduled task which runs the above command periodically, but I'm hoping there's a better way. Does anyone have any ideas as to why this isn't working, and what we can do to fix it? Thanks for your help, Richard

    Read the article
