Search Results


  • Acer recovery disks not bootable?

    - by user13743
    We got a new Acer laptop with Vista installed at work. As it was getting ready to go out in the field, we wanted to run a burn-in test on it. We made the recovery DVDs before we ran the test. Part of the burn-in was bonnie++, which does a destructive read/write test of the hard drive. The machine passed with flying colors, but when we tried to boot from the recovery DVD to begin re-installing the system, the machine eventually fell back to PXE boot.

    After some googling, it appears these 'recovery' disks expect a certain recovery partition to exist on the hard drive; they are in fact not bootable at all and are useless in the absence of that partition. Is this the case, and is this "The Way Things Are" with all PC manufacturers and Windows Vista+ nowadays? How do I get my hands on actual bootable DVDs?

    I've emailed Acer support. I see an option on their site to purchase recovery disks, but I suspect these are the same non-bootable disks that I burned on the new system. Will Acer provide actual boot disks?


  • Seriousness of a "Smart" disk error. How long will it last?

    - by Workshop Alex
    I have a 1 TB data disk, and the BIOS and Windows are reporting a SMART error. At least, I get a SMART event, but it doesn't indicate how serious the failure could be. My system is about 6 months old, including the disk, so the warranty will cover the damage. Unfortunately, I lack a second 1 TB disk which I could use to make a full backup. The most important data on this disk is safe, but there's a lot of work data which could be regenerated, though that would cost a lot of time. So I ordered a 1 TB USB disk, which will arrive in three days. By then I can make a full backup of the data, and afterwards the disk can crash. But will it live that long? (Well, I won't use the PC as long as I can't make a backup.) How serious is such a SMART event? I know it's serious enough to have the disk replaced, but will it live for another week, or could it die any moment?

    Update: I purchased a 1 TB external disk and spent most of the day making a backup of the 1 TB disk. It survived that. I then received a new disk, since the old one was still under warranty, and replaced the hard disk. Then I had to spend most of a day again restoring the backup. I need to send back the faulty disk and now have an additional external disk, which is always practical. :-) The SMART error did not cause any failures on the original disk. I won't advise ignoring these warnings, but the disk still had enough life in it to last a few more days. (Just make sure you have a good backup.) And oh, the horror of having to make a complete backup of such a huge disk. :-) If your data is important, make sure you have something that supports incremental backups and lots of space. (In my case, the data wasn't very important, just practical to have together on one disk.)
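
    A quick way to gauge how bad a SMART event actually is - a minimal sketch using smartmontools, assuming the data disk is /dev/sda:

        # Overall health verdict and full attribute dump
        smartctl -H /dev/sda
        smartctl -a /dev/sda
        # Attributes worth watching: Reallocated_Sector_Ct, Current_Pending_Sector,
        # Offline_Uncorrectable - rising raw values usually mean the disk is dying.
        # Optionally run a self-test and check the result afterwards:
        smartctl -t short /dev/sda
        smartctl -l selftest /dev/sda

    A handful of reallocated sectors often stays stable for months; pending sectors that keep climbing are a reason to stop using the disk immediately.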


  • Use Excel Table Column in ComboBox Input Range property

    - by V7L
    I asked this on Stack Overflow and was redirected here; apologies for the redundancy. I have an Excel worksheet with a combo box on Sheet1 that is populated via its Input Range property from a dynamic named range on Sheet2. It works fine, and no VBA is required. My data on Sheet2 is actually in an Excel Table (all data is in the XLS file; no external data sources). For clarity, I wanted to use a structured table reference for the combo box's Input Range, but cannot seem to find a syntax that works, e.g. myTable[[#Data],[myColumn3]]. I cannot find any indication that the combo box WILL accept structured table references, though I cannot see why it wouldn't. So, a two-part question: 1. Is it possible to use a table column reference in the combo box's Input Range property (not using VBA)? 2. If so, HOW?


  • Issues Converting Plain Text Into Microsoft Word Bulleted Lists

    - by user787832
    I'm a programmer. I hate status reports. I found a way to live with it. While I am working in my IDE (Visual SlickEdit) I keep a plain text file open in one of the file/buffer tabs. As I finish things I just jot down a quick note into that file. At the end of the week that becomes my weekly status report. Example entries:

        The Datatables.net plugin runs very slowly in IE 8 with more than 2,000 records. I changed the way I did the server-side code to process the data, to make less work for the plugin and get decent performance for the IE 8 users.

        I made a class to wrap data from the new data collection objects into the legacy data holder objects. This will let the new database code be backward compatible with the legacy code until we can replace it.

        I found the bug reported by Jane. The software is fine. The database we use for the test site has data that is corrupted in a way it wouldn't be for the production site.

    At the end of the month I go back to each weekly *.txt file and paste all of the entries into an MS Word file for a monthly report. I give the monthly report to a liaison to the contracting company, who has to compile everyone's monthly reports into a single MS Word 2007 document. His problem, soon to be my problem, comes when he highlights my notes in MS Word 2007 to put bullets in front of my paragraphs. Word keeps the newline/carriage-return characters embedded in the pasted text, so the bulleted text comes out staggered instead of in neat chunks. This:

        I found the bug reported by Jane. The software is fine. The database we use for the test site has data that is corrupted in a way it wouldn't be for the production site.

    becomes this:

        I found the bug reported by Jane. The software
        is fine. The database we use for the test site has
        data that is corrupted in a way it wouldn't be for
        the production site.

    I tried turning word wrap on in my IDE for the text files I put my status notes in; it just inserts some kind of newline character anyway. Searching/replacing those characters in the text files destroys the paragraphs. Once my notes are pasted into MS Word, Word automatically translates them into paragraph breaks, and searching/replacing them there has similar results: the blank lines separating the notes disappear. One big mess.

    What I would like is to keep adding my status notes to a text file as I do now, but do something different when I paste the notes into MS Word, so that my liaison can select the text, hit the bullet command, and NOT get the staggered text shown above. Any ideas? Thanks much in advance. Steve
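
    One way to strip the hard wraps before pasting - a minimal sketch, assuming the notes live in status.txt and are separated by blank lines:

        # Paragraph mode: RS="" makes each blank-line-separated block one record.
        # Joining the lines inside each record removes the hard line breaks, so
        # Word sees one paragraph per note and the bullet command has nothing to stagger.
        awk 'BEGIN { RS=""; ORS="\n\n" } { gsub(/\n/, " "); print }' status.txt > status_unwrapped.txt

    Pasting status_unwrapped.txt into Word instead leaves each note as a single paragraph, which bullets cleanly.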


  • sudoers security

    - by jetboy
    I've set up a script to do Subversion updates across two servers - the localhost and a remote server - called by a post-commit hook run by the www-data user. /srv/svn/mysite/hooks/post-commit contains:

        sudo -u cli /usr/local/bin/svn_deploy

    /usr/local/bin/svn_deploy is owned by the cli user, and contains:

        #!/bin/sh
        svn update /srv/www/mysite
        ssh cli@remotehost 'svn update /srv/www/mysite'

    To get this to work I've had to add the following to the sudoers file:

        www-data ALL = (cli) NOPASSWD: /usr/local/bin/svn_deploy
        cli      ALL = NOEXEC:NOPASSWD: /usr/local/bin/svn_deploy

    Entries for both www-data and cli were necessary to avoid the error:

        post commit hook failed: no tty present and no askpass program specified

    I'm wary of giving any kind of elevated rights to www-data. Is there anything else I should be doing to reduce or eliminate any security risk?
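
    A few low-risk hardening steps worth considering on top of this - a sketch, assuming the paths above:

        # The script runs as cli via sudo, so it must not be writable by anyone
        # else - otherwise www-data could rewrite it and run arbitrary code as cli.
        chown cli:cli /usr/local/bin/svn_deploy
        chmod 755 /usr/local/bin/svn_deploy
        # Validate sudoers syntax after every change (a broken file locks out sudo)
        visudo -c

    On remotehost, the ssh key that cli uses can be pinned to the single command it needs by prefixing the key's line in ~cli/.ssh/authorized_keys (key material elided):

        command="svn update /srv/www/mysite",no-pty,no-port-forwarding ssh-rsa AAAA...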


  • ORA-01157 / Can't connect to database

    - by Tom
    Hi everyone, this is a follow-up from this question. Let me start by saying that I am NOT a DBA, so I'm really, really lost with this. A few weeks ago, we lost contact with one of our SIDs. All the other services are working, but this one in particular is not. What we got when trying to connect was:

        ORA-01033: ORACLE initialization or shutdown in progress

    An attempt to alter database open ended in:

        ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
        ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'

    I tried to shut down / restart the database, but got this:

        Total System Global Area  566231040 bytes
        Fixed Size                  1220604 bytes
        Variable Size             117440516 bytes
        Database Buffers          444596224 bytes
        Redo Buffers                2973696 bytes
        Database mounted.
        ORA-01157: cannot identify/lock data file 6 - see DBWR trace file
        ORA-01110: data file 6: '/u01/app/oracle/oradata/xxx/xxx_data.dbf'

    When everything stayed the same, I deleted the dbf files (rm xxx_data.dbf xxx_index.dbf) and recreated them using touch xxx_data.dbf. I also tried to recreate the tablespace using:

        CREATE TABLESPACE DATA DATAFILE XXX_DATA.DBF

    and got:

        Database not open

    As I said, I don't know how bad this is, or how far I am from regaining access to my database (well, to this SID at least; the others are working). I would imagine a last resort would be to throw everything away and recreate it, but I don't know how to, and I was hoping there's a less destructive solution. Any help will be greatly appreciated. Thanks in advance.
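
    Since the datafiles were already removed, one common way forward is to drop the missing file from the control file and open the database - a sketch, assuming a NOARCHIVELOG instance and that losing the contents of that tablespace is acceptable (it is destructive for everything stored in it):

        sqlplus / as sysdba <<'EOF'
        STARTUP MOUNT;
        -- Tell Oracle to stop looking for the lost file (NOARCHIVELOG syntax)
        ALTER DATABASE DATAFILE '/u01/app/oracle/oradata/xxx/xxx_data.dbf' OFFLINE DROP;
        ALTER DATABASE OPEN;
        -- Once open, drop and recreate the damaged tablespace
        DROP TABLESPACE data INCLUDING CONTENTS;
        EOF

    This only gets the rest of the SID working again. If the database is in ARCHIVELOG mode with backups, restoring and recovering the datafile is the far better route.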


  • Internet Explorer ignoring PHP setcookie() from CentOS server

    - by Hussein Sabbagh
    In the past we used a Windows XAMPP server for an internal website. It worked fine apart from some intermittent issues, so we decided to move to a LAMP server on CentOS. We made the switch today, but it turns out Internet Explorer ignores every attempt I make at saving a cookie. There is no underscore in the URL being used... the URL is actually the same one the XAMPP server used, where I was able to save cookies without any problems. It really doesn't make any sense to me; all of the code is the same. The only things that changed are the version of PHP and the server OS. The website works in all other browsers except IE. I can't even make a simple setcookie call work. On a blank test page I use:

        setcookie("test", "test", time() + 36000, "/");
        sleep(5);
        print_r($_COOKIE);

    and there is nothing there. Our users can't log into the website because of this, and I have no idea what the issue is. If anyone can provide any clues or resolutions I would greatly appreciate it. Obviously the easy answer is to not use IE, but that is not an option in this case.
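
    Two things worth ruling out on a freshly built box - both are assumptions, not diagnoses from the post. First, the server clock: setcookie's expiry is computed from server time, so a clock that is hours behind yields a cookie the browser treats as already expired. Second, IE's privacy rules: when IE decides a response is third-party (framed pages, some zone setups), it silently drops cookies unless a P3P header is present, e.g. header('P3P: CP="CAO PSA OUR"'); other browsers ignore P3P, which would fit an IE-only failure. Checking and fixing the clock on CentOS 5, assuming outbound NTP access:

        date -u                                    # compare with actual UTC time
        ntpdate pool.ntp.org                       # one-shot sync
        chkconfig ntpd on && service ntpd start    # keep it synced

    Also note that print_r($_COOKIE) in the same request that calls setcookie() is always empty - the cookie only comes back from the browser on the next request - so verify with a second page load.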


  • How to make an x.509 certificate from a PEM one?

    - by Ken
    I'm trying to test a script, locally, which involves uploading a file using a Java-based program to a FileZilla FTPES server. For the real thing there is a real certificate on the FZ server, and the upload step (tested alone) seems to work fine. I've installed FileZilla Server on my dev box (so it'll test uploading from localhost to localhost). I don't have a real certificate for it, of course, so I used the "Generate new certificate..." button in FZ. It works fine from an interactive FTPES program (as long as I OK the unknown cert), but from my Java program it throws a javax.net.ssl.SSLHandshakeException ("unable to find valid certification path to requested target").

    So how do I tell Java that this certificate is OK with me? (I know there's a way to change the Java program to accept any certificate, but I don't want to go down that route. I want to test it just as it will happen in production, and I don't want to ignore unknown certificates in production.) I found that Java has a program called "keytool" that seems to be for managing this sort of thing, but it complains that the certificate file that FZ generated is not an "x.509" file. A posting from the FZ side said it was "PEM encoded". I have "openssl" here, which looks like it's perfect for converting between certificate formats, but I think my understanding of certificate formats is wrong because I'm not seeing anything obvious.

    My knowledge of security certificates is a bit shaky, so if my title is stupidly wrong, please help by fixing that. :-)
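
    For the record, "PEM" and "X.509" are not competing formats: the FZ-generated file is an X.509 certificate in PEM (base64) encoding, and older keytool versions want DER (binary) encoding - keytool will also choke if FileZilla put the private key in the same file. A sketch of the conversion, assuming the generated file is certificate.crt and the jar name is a placeholder:

        # Pull just the certificate out (drops any private key in the file)
        openssl x509 -in certificate.crt -out cert_only.pem
        # Convert PEM -> DER for keytool
        openssl x509 -in cert_only.pem -outform DER -out cert.der
        # Import into a truststore, then point the JVM at it
        keytool -import -alias filezilla-test -file cert.der -keystore my_truststore.jks
        java -Djavax.net.ssl.trustStore=my_truststore.jks -jar your-upload-tool.jar

    The javax.net.ssl.trustStore property tells the JVM to trust certificates in that store, which should clear the "unable to find valid certification path" error without weakening the production code path.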


  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx & Gunicorn) environment, and my app (as well as various config files) is stored in a Python virtual environment inside /srv, which the www-data user has access to. The nginx and gunicorn processes all run as www-data. My web app requires secure credentials, which I am storing in an environment.sh file. This file contains various exports and is run using source before the gunicorn processes execute.

    My concern is the location of the environment.sh file and its permissions. Will it be okay storing this file inside the /srv folder, where www-data has access to it? Or should it be stored and owned by root somewhere else, such as /var/myapp/environment.sh? Also, regarding the www-data user: if any of my web processes (which run as www-data) are compromised and someone gains access to them, does that mean that the user could potentially read any file on the system, even if they can't write? Including my secure keys?
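
    A common middle ground - a sketch, assuming the file lives at /srv/myapp/environment.sh and is sourced by whatever launches gunicorn:

        # Owned by root, readable only by root and the www-data group
        chown root:www-data /srv/myapp/environment.sh
        chmod 640 /srv/myapp/environment.sh
        # If a root-owned init script sources it before dropping privileges,
        # it can be tightened to root-only:
        # chmod 600 /srv/myapp/environment.sh

    On the second question: no, a compromised www-data process cannot read arbitrary files - it can only read what the permission bits (and any ACLs) allow the www-data user and group to read. That is exactly why keeping world-readable bits off the secrets file matters.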


  • Distributed storage and computing

    - by Tim van Elteren
    Dear Serverfault community, after researching a number of distributed file systems for deployment in a production environment, with the main purpose of performing both batch and real-time distributed computing, I've identified the following list as potential candidates, based mainly on maturity, license and support: Ceph, Lustre, GlusterFS, HDFS, FhGFS, MooseFS, XtreemFS.

    The key properties that our system should exhibit:

    • an open-source, liberally licensed, yet production-ready solution, i.e. mature, reliable, community- and commercially supported;
    • ability to run on commodity hardware, preferably designed for it;
    • high availability of the data, with the most focus on reads;
    • high scalability, so operation over multiple data centres, possibly on a global scale;
    • removal of single points of failure through replication and distribution of (meta-)data, i.e. fault tolerance.

    The sensitivity points that were identified, and which resulted in the following questions, are:

    • Transparency to the processing layer/application with respect to data locality, i.e. knowing where data is physically located at the server level, mainly for resource allocation and fast processing. How can this high performance be accomplished? Do you know from experience which solutions provide this transparency, and to what extent?
    • POSIX compliance, or conformance, is mentioned on the wiki pages of most of the solutions listed above. The question here mainly is: how relevant is support for the POSIX standard? Hadoop, for example, isn't POSIX-compliant by design; what are the pros and cons?
    • What about the difference between synchronous and asynchronous operation of a distributed file system? A synchronous distributed file system is preferable for reliability, but it also imposes certain limitations with respect to scalability. What would be, from your expertise, the way to go?

    I'm looking forward to your replies. Thanks in advance! :) With kind regards, Tim van Elteren


  • Simple Ubuntu Server - Expanding disk space - adding a new drive (LVM/RAID0) to an existing setup - how?

    - by NightWolf
    I have a 1 TB ext4 partition mounted at / with all my data and Ubuntu 11.04 (Natty) installed. Now this drive is almost full (I used it as a database server for some processing). RAID0 is OK; I can accept the risk of a failure (touch wood). But I need a way to grow this partition. I have a new 1 TB drive I want to add; however, as my Ubuntu boot and all data are on that one partition, I'm not sure how I can go about setting up a RAID0 or LVM array without losing all my data. So the question is: how can I extend my existing ext4 partition over two physical drives without losing data? Thanks!
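
    For context: a plain ext4 partition can't simply be stretched onto a second disk - it has to sit on top of something that can span disks (LVM or md RAID), and converting a mounted root filesystem in place is risky. The usual route is back up, rebuild the storage as LVM across both disks, and restore. A sketch of the rebuild step only, assuming the backup is already safe elsewhere and the partition names /dev/sda1 and /dev/sdb1 (both are assumptions):

        # Turn both disks into LVM physical volumes and pool them
        pvcreate /dev/sda1 /dev/sdb1
        vgcreate vg0 /dev/sda1 /dev/sdb1
        # One logical volume spanning both disks (add -i 2 for RAID0-like striping)
        lvcreate -l 100%FREE -n root vg0
        mkfs.ext4 /dev/vg0/root

    This destroys the current contents, so the backup/restore around it is the important part. The payoff is that adding a third disk later becomes pvcreate + vgextend + lvextend + resize2fs, with no reformat.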


  • Synchronizing Outlook 2010 Calendar with Sharepoint 2010 website Calendar

    - by Henry
    How do I synchronize my Outlook 2010 calendar to a SharePoint 2010 website? I am able to synchronize the SharePoint calendar into my Outlook calendar, but not the other way around - Outlook calendar data (meetings, etc.) into the SharePoint calendar. When people in my office go to our intranet site, I just want them to see my calendar in SharePoint, updated with the data from Outlook; this calendar data on SharePoint should be read-only to other users.


  • How do I make the first row of an Excel chart be treated as a heading when it's a number?

    - by Andrew Grimm
    Given a data sample like:

        Prisoner   24601   0.50
        Day 1      80      90
        Day 2      81      89
        Day 3      82      90
        Day 4      81      91

    what's the easiest way to tell Excel that 24601 and 0.50 are data series names rather than Y-axis values when creating a line chart?

    Approaches I'm aware of:

    • Turn the prisoner numbers into text by entering ="24601" and ="0.50"
    • Only select rows 2 onwards as data, and then add in the labels once the graph has been created

    An approach that doesn't appear to work:

    • Asking Excel to format the first row's numbers as text.


  • Firewire 800 with Windows

    - by Amitesh
    I have two Windows machines with Windows Server 2003 installed on them. I am running a LabVIEW script on one machine and storing the data, but since that machine has too little storage, I want to transfer the data to the other machine using FireWire 800. Is it possible to configure the second machine so that it appears just as an external HDD attached to the first, and write data directly to it? (This is possible with Macs.) I don't want to use Ethernet (TCP/IP) to transfer the data. Is it possible? Thanks.


  • DNS resolution problems; dig SERVFAIL error

    - by JustinP
    I'm setting up a couple of dedicated servers and having problems setting up my nameservers properly. One of these is a LEMP server (LAMP with nginx in place of Apache), and the other will function solely as an email server, running exim/dovecot/ASSP antispam (no Apache). The LEMP server is CentOS 5.5 with no control panel, while the email server is CentOS 5.5 as well, with cPanel/WHM. So, I've had problems getting DNS set up properly. I have two domains, each one pointing to one of these servers. The nameservers are registered correctly with the domain registrar, and the nameserver IPs are entered correctly as well. I've spoken to tech support at the registrar and they confirm that everything is set up on their end.

    Not knowing much about DNS, I googled nameservers and DNS until I nearly went blind, and spent hours messing with the configuration. Eventually, I got the LEMP server's DNS working properly (no cPanel). Pleased with this triumph, I'm trying to mimic that configuration and repeat the process with the email server, and it's just not happening. The nameserver starts and stops, but the domain doesn't resolve. Things I have tried:

    • Going through the standard procedures to set up DNS in WHM
    • Clearing all DNS information, uninstalling BIND, then reinstalling all of that and again going through the WHM procedures for setting up DNS
    • Clearing all DNS information, and setting up BIND via the shell (completely outside of cPanel), using my config and zone files from the LEMP server as a template

    named runs just fine, but nothing is resolving. When I run dig any example.com I get a SERVFAIL message. nslookup returns no information. Here are my config and zone files.

    named.conf:

        controls {
            inet 127.0.0.1 allow { localhost; } keys { coretext-key; };
        };

        options {
            listen-on port 53 { any; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            // Those options should be used carefully because they disable port
            // randomization
            // query-source port 53;
            // query-source-v6 port 53;
            allow-query { any; };
            allow-query-cache { any; };
        };

        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };

        view "localhost_resolver" {
            match-clients { 127.0.0.0/24; };
            match-destinations { localhost; };
            recursion yes;
            //zone "." IN {
            //    type hint;
            //    file "/var/named/named.ca";
            //};
            include "/etc/named.rfc1912.zones";
        };

        view "internal" {
            /* This view will contain zones you want to serve only to "internal"
               clients that connect via your directly attached LAN interfaces -
               "localnets" . */
            match-clients { localnets; };
            match-destinations { localnets; };
            recursion yes;
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            // include "/var/named/named.rfc1912.zones";
            // you should not serve your rfc1912 names to non-localhost clients.
            // These are your "authoritative" internal zones, and would probably
            // also be included in the "localhost_resolver" view above :
            zone "example.com" {
                type master;
                file "data/db.example.com";
            };
            zone "3.2.1.in-addr.arpa" {
                type master;
                file "data/db.1.2.3";
            };
        };

        view "external" {
            /* This view will contain zones you want to serve only to "external"
             * clients that have addresses that are not on your directly attached
             * LAN interface subnets: */
            match-clients { any; };
            match-destinations { any; };
            recursion no;
            // you'd probably want to deny recursion to external clients, so you
            // don't end up providing free DNS service to all takers
            allow-query-cache { none; };
            // Disable lookups for any cached data and root hints
            // all views must contain the root hints zone:
            //include "/etc/named.rfc1912.zones";
            zone "." IN {
                type hint;
                file "/var/named/named.ca";
            };
            zone "example.com" {
                type master;
                file "data/db.example.com";
            };
            zone "3.2.1.in-addr.arpa" {
                type master;
                file "data/db.1.2.3";
            };
        };

        include "/etc/rndc.key";

    db.example.com:

        $TTL 1D
        ;
        ; Zone file for example.com
        ;
        ; Mandatory minimum for a working domain
        ;
        @       IN SOA  ns1.example.com. contact.example.com. (
                        2011042905 ; serial
                        8H         ; refresh
                        2H         ; retry
                        4W         ; expire
                        1D )       ; default_ttl
                NS      ns1.example.com.
                NS      ns2.example.com.
        ns1     A       1.2.3.4
        ns2     A       1.2.3.5
        example.com.    A       1.2.3.4
        localhost       A       127.0.0.1
        www     CNAME   example.com.
        mail    CNAME   example.com.
        ;

    db.1.2.3:

        $TTL 1D
        $ORIGIN 3.2.1.in-addr.arpa.
        @       IN SOA  ns1.example.com contact.example.com. (
                        2011042908 ;
                        8H ;
                        2H ;
                        4W ;
                        1D )
                NS      ns1.example.com.
                NS      ns2.example.com.
        4       PTR     hostname.example.com.
        5       PTR     hostname.example.com.
        ;

    Also of note: both of these servers are managed. Tech support is very responsive, and largely useless. Hours go by with them asking me questions to narrow down what could be wrong; then they pass the ticket to the tech on the next shift, who ignores everything that's happened already and spends his whole shift asking all the same questions the last guy asked. So, in summary:

    • Nameservers, with IPs, are correctly registered with the domain registrar
    • named is configured and running
    • ...and must not be configured correctly, because nothing resolves.

    Any help would be great. I changed the domains and IPs in the files to generics, but let me know if you need to know the domain in question. Thanks!

    UPDATE: I found that I didn't have 127.0.0.1 in /etc/resolv.conf, so I added it, along with the two public IPs that named listens on. resolv.conf:

        search www.example.com example.com
        nameserver 127.0.0.1
        nameserver 7.8.9.10   ;Was in here by default, authoritative nameserver of hosting company
        nameserver 1.2.3.4    ;Public IP #1
        nameserver 1.2.3.5    ;Public IP #2

    Now when I dig example.com from the host, it resolves. If I try to dig from my other server (in the same datacenter), or from the internet, it times out or I get SERVFAIL.
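
    Two checks that often pinpoint this kind of SERVFAIL quickly - a sketch, with paths and IPs taken from the config above:

        # Syntax-check the config and each zone file
        named-checkconf /etc/named.conf
        named-checkzone example.com /var/named/data/db.example.com
        named-checkzone 3.2.1.in-addr.arpa /var/named/data/db.1.2.3
        # Query the server directly, bypassing /etc/resolv.conf
        dig @1.2.3.4 example.com A +norecurse

    If dig at the public IP works locally but not remotely, the usual suspects are a firewall dropping port 53 (check both UDP and TCP in iptables) or named not actually listening on the public interface (netstat -lnp | grep :53).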


  • How to move S3 bucket to different location

    - by skrat
    We use S3 for storing millions of entries in our webapp. Now we're moving the whole thing to EC2, on EU servers, and we also want to move the S3 data to the EU. But the bucket we use is in the US, and there seems to be no tool to move a whole bucket's contents to a different bucket. There is also the problem of how to synchronize the data later, when we switch to the EU bucket: the data created while the migration is running.
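
    These days the standard AWS CLI can do a server-side bucket-to-bucket copy, and re-running it picks up objects created during the first pass - a sketch, with bucket names and regions as placeholders:

        # Initial bulk copy, US -> EU (objects are copied server-side, not downloaded)
        aws s3 sync s3://my-us-bucket s3://my-eu-bucket \
            --source-region us-east-1 --region eu-west-1
        # After cutover, run the same command again: sync only copies objects
        # that are new or changed since the previous run.

    Running the sync once more just before and just after switching the application to the EU bucket keeps the migration window small.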


  • GParted tells me my partition has 1.30 GiB used space but I cannot access its contents

    - by reprogrammer
    I have an ext4 partition (/dev/sda7) for my Linux installation and another (/dev/sda5) for keeping my data. When installing Ubuntu 10.04 LTS, I set the mount point of /dev/sda7 to "/" and that of /dev/sda5 to "/data". GParted tells me that 1.30 GiB out of the 70.12 GiB on /dev/sda5 has been used. But the mounted directory "/data" is empty. So it looks like my data is there but I cannot access it. Besides, when I set the mount points, I didn't check the "format" box, so the partition shouldn't have been formatted. How can I check whether the partition has been formatted? How can I recover my files?
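
    Two quick checks from a terminal - a sketch, with the device name taken from the question:

        # When was the filesystem on /dev/sda5 created? A timestamp matching the
        # Ubuntu install date strongly suggests it was (re)formatted after all.
        sudo tune2fs -l /dev/sda5 | grep -i created
        # Is /dev/sda5 actually mounted at /data, or is this the empty
        # mount-point directory of the root filesystem?
        mount | grep sda5
        df -h /data

    Note that roughly 1-2% "used" space on a freshly formatted ext4 partition is normal - that is the journal, inode tables and lost+found, not user files, and 1.30 GiB of 70.12 GiB is in that range. If tune2fs shows a creation date matching the install, the old filesystem was overwritten and recovery tools such as testdisk/photorec are the remaining option.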


  • Does anyone know why rsync would keep sending the files over and over again?

    - by beagleguy
    I'm trying to use rsync to back up some files, about half a TB. It's now in a state where it keeps sending the same files every time it runs. For example:

        rsync -av /data/source/* user@host:/data/dest
        sending incremental file list
        source/file1.txt
        source/file2.txt

    I then verify those files were copied over... then the next time it runs it does the same thing:

        rsync -av /data/source/* user@host:/data/dest
        sending incremental file list
        source/file1.txt
        source/file2.txt

    Any idea why it's getting stuck on these files? I've tried wiping the whole dest directory and starting over, but no luck. Thanks.
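
    rsync can be asked to say why it thinks each file differs - a sketch using the paths from the question:

        # -i / --itemize-changes prints a per-file change summary; with --dry-run
        # nothing is transferred, so the trigger can be inspected harmlessly.
        rsync -avi --dry-run /data/source/* user@host:/data/dest

    In the itemized output, a 't' means the modification times differ and an 's' means the sizes differ. Timestamps are the usual culprit - for example a source filesystem (FAT, some network mounts) whose timestamp resolution the destination can't represent - in which case adding --modify-window=1 or, as a blunter tool, --size-only typically stops the re-sending.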


  • GPO startup script not copying files

    - by marcwenger
    I created a GPO startup script to execute for computers in a specific AD container. The script takes a file from the AD netlogon share and places it in a directory on the computer. Given the right permissions, I (i.e. myself) can execute the script just fine and the file copies. But it doesn't work at startup - the file does not copy over from the AD server. The startup script should run as LocalSystem (am I right?). So the question is: why do the files not copy at startup? Could it be because of:

    • the permissions of the LocalSystem user?
    • reading the registry being problematic at startup?
    • obtaining files from the AD netlogon folder being problematic at startup?
    • or am I missing it completely?

    My test machine does have the registry key and local directories described in the script. I myself have standard user permissions on the test machine. The AD server is Windows 2008; the test client is Windows XP SP3 (and soon to be Windows 7, where I assume permissions issues will be inevitable).

        Dim wShell, fso, oraHome, tnsHome, key, srcDir

        Set wShell = WScript.CreateObject("WScript.Shell")
        Set fso = CreateObject("Scripting.FileSystemObject")

        key = "HKLM\Software\Oracle\Oracle_Home"

        On Error Resume Next
        oraHome = wShell.RegRead(key)

        If Err.Number = 0 Then
            tnsHome = oraHome + "\" + "network\admin\"
            srcDir = wShell.ExpandEnvironmentStrings("%logonserver%") + "\netlogon\UpdatedFiles\"
            fso.CopyFile srcDir + "file1.ext", tnsHome, True
        End If

    Side note: to ensure that the script is properly deployed, I purposely put some errors in the script, and on the next startup the error message appeared. So I know the GPO is deployed properly.


  • estimating size of reply HTTP headers

    - by Guanidene
    I am trying to download a file via a basic socket connection, using an HTTP GET request, so I have to specify how many bytes of incoming data to read from the socket. However, I am having trouble estimating the amount of data (say, in bytes) the server will reply with. I know that there is a "Content-Length" field in the reply sent by the server, but that gives me the size of the actual body (without the HTTP headers). Is there a way to get the exact size of the HTTP headers sent by the server, or is an estimate required? (I am doing this for downloading on a mobile network, where every bit of data matters in terms of time and money, so I don't wish to make an unnecessarily large estimate of the header size.)
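
    There is no header field that announces the headers' own size; the reliable approach in code is to read until the blank line (CRLF CRLF) that terminates the headers, then read exactly Content-Length body bytes. For a one-off measurement of a given server's typical header size, something like this works - a sketch, with the URL as a placeholder:

        # -s silent, -o discard the body, -D - dump the received headers to stdout
        curl -s -o /dev/null -D - http://example.com/file | wc -c

    The byte count includes the status line and the terminating blank line. Since headers vary per response (Date, cookies, chunked encoding, ...), reading to the CRLF CRLF delimiter remains the only exact method.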


  • Server HTTP Load times slow?

    - by cdog5000
    Hello, my server @ codemeh.com (HTTP server) seems to be loading slowly at random. I cannot tell if it is just my forums (http://www.codemeh.com/forums/) that load slowly or if the WHOLE site is just loading slowly, since the forums are the largest thing on the site right now.

        load average: 0.02, 0.17, 0.20

    That is super low to my knowledge. I have tried the Google Page Analytics plug-in for Firefox to find the problem, but nothing comes up that is VERY bad. If someone could investigate this for me, since I am very new at Apache and server configuration. Thanks!

    top output:

          PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM   TIME+  COMMAND
         7493 www-data  15   0 98.2m  16m 9092 S    3  0.8  0:27.24 apache2
        26429 www-data  15   0 98.2m  15m 7392 S    3  0.7  0:03.45 apache2
        26477 www-data  17   0 98.2m  15m 7396 S    3  0.7  0:03.16 apache2
            1 root      15   0  2468 1384 1156 S    0  0.1  0:00.49 init
         1367 root      25   0  2564  816  660 S    0  0.0  0:00.00 xinetd
         1526 root      15   0 29576 5420 1976 S    0  0.3  1:02.69 fail2ban-server
         3703 root      15   0 13512 9312 1696 S    0  0.4  0:11.59 miniserv.pl
         3915 postfix   15   0  6056 1652 1320 S    0  0.1  0:00.00 pickup
         4010 root      15   0  4548 1296  972 S    0  0.1  0:37.36 ntpd
         7448 root      15   0 98528  26m  20m S    0  1.3  0:00.27 apache2
         7454 www-data  18   0 33580 2616  368 S    0  0.1  0:00.04 apache2
         7528 www-data  18   0  108m  24m  15m S    0  1.2  0:27.60 apache2
         7974 root      16   0  8700 2728 2164 S    0  0.1  0:00.08 sshd
         8123 cdog5000  15   0  8832 1596  896 S    0  0.1  0:00.00 sshd
         8126 cdog5000  18   0  4484 1716 1384 S    0  0.1  0:00.00 bash
         8141 cdog5000  15   0  2344  980  796 R    0  0.0  0:00.11 top
        13461 root      15   0  8700 2728 2164 S    0  0.1  0:00.07 sshd
        13567 cdog5000  18   0  8832 1492  896 S    0  0.1  0:00.33 sshd
        13569 cdog5000  18   0  4484 1728 1388 S    0  0.1  0:00.09 bash
        17983 root      15   0  4392 1268  988 S    0  0.1  0:00.00 su
        17987 root      15   0  4516 1752 1380 S    0  0.1  0:00.09 bash
        18081 www-data  15   0 98.2m  14m 6588 S    0  0.7  0:04.91 apache2
        20000 www-data  15   0 98.3m  15m 8040 S    0  0.8  0:02.45 apache2
        20019 www-data  15   0 98.2m  14m 6808 S    0  0.7  0:04.97 apache2
        30343 root      15   0  3964 1012  764 S    0  0.0  0:00.03 vsftpd
        30382 root      15   0  2304  908  716 S    0  0.0  0:00.62 cron
        30401 mysql     17   0  141m  17m 5416 S    0  0.9  1:02.20 mysqld
        30424 root      15   0  5472  912  504 S    0  0.0  0:00.04 sshd
        30473 syslog    15   0  1916  676  536 S    0  0.0  0:01.02 syslogd
        30611 amavis    15   0 33872  25m 2292 S    0  1.2  0:03.11 amavisd-new
        31890 amavis    18   0 34888  24m 1792 S    0  1.2  0:00.00 amavisd-new
        31891 amavis    18   0 34888  24m 1784 S    0  1.2  0:00.00 amavisd-new
        32397 clamav    18   0  104m  84m 1272 S    0  4.1  1:06.46 clamd
        32563 clamav    15   0 12832 5716 4440 S    0  0.3  0:01.29 freshclam
        32573 root      23   0  1892  456  372 S    0  0.0  0:00.00 courierlogger
        32575 root      18   0  2096  684  544 S    0  0.0  0:00.01 authdaemond
        32583 root      23   0  1892  360  284 S    0  0.0  0:00.00 courierlogger
        32584 root      24   0  2000  612  516 S    0  0.0  0:00.00 couriertcpd
        32598 root      23   0  1892  360  284 S    0  0.0  0:00.00 courierlogger
        32599 root      25   0  2000  612  516 S    0  0.0  0:00.00 couriertcpd
        32604 root      18   0  1892  460  372 S    0  0.0  0:00.00 courierlogger
        32605 root      18   0  2000  624  532 S    0  0.0  0:00.00 couriertcpd
        32607 root      18   0  2308  404  256 S    0  0.0  0:00.02 authdaemond
        32608 root      18   0  2096  260  116 S    0  0.0  0:00.03 authdaemond
        32609 root      15   0  2308  404  256 S    0  0.0  0:00.03 authdaemond
        32610 root      18   0  2096  260  116 S    0  0.0  0:00.02 authdaemond
        32612 root      18   0  2308  404  256 S    0  0.0  0:00.02 authdaemond
        32621 root      24   0  1892  364  284 S    0  0.0  0:00.00 courierlogger
        32622 root      25   0  2000  608  516 S    0  0.0  0:00.00 couriertcpd
        32633 root      15   0  105m  936  716 S    0  0.0  0:02.26 nscd
        32719 root      16   0  6252 1680 1344 S    0  0.1  0:01.24 master
        32738 postfix   15   0  6188 1776 1400 S    0  0.1  0:00.44 qmgr
        32758 postfix   15   0  6492 2564 1788 S    0  0.1  0:00.14 tlsmgr

    /etc/apache2/sites-available/default:

        NameVirtualHost *
        <VirtualHost *>
            ServerAdmin webmaster@localhost
            DocumentRoot /var/www/web1/web/
            <Directory /var/www/web1/web/>
                Options Indexes MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
        </VirtualHost>

    I have a fail2ban server, and I don't have any firewall at this point in time that I know of. SMF is 2.0 RC4 and the Apache version is 2.2.14. I run a MySQL server on another box in the same DC (persistent connections). I installed eAccelerator today and it didn't help.

    Read the article

  • Can Dovecot IMAP automatically create Maildir folders for new (virtual) users?

    - by user233441
    Everyone, I am learning to set up a Dovecot home IMAP server using a virtual Ubuntu 12.04 machine. My intention is eventually to have a home server that uses POP3 to take email from several addresses and remove it from my ISP's servers, while making it accessible through a home IMAP server (this is similar to the setup described at https://help.ubuntu.com/community/POP3Aggregator, which explains how to set up the system with Dovecot version 1, and is thus outdated). I intend to use the ISP's server directly when sending messages, and to BCC all sent messages to myself.

    I've completed the basic setup of the test server: getmail uses POP3 to fetch messages from two test email accounts and successfully delivers them to the respective Maildir-style new folders on the virtual machine. Dovecot then successfully sees these messages. I have two questions:

    1) I had to set up the new, cur, and tmp folders for both of the test accounts manually to get this setup to work. Is there a way to get Dovecot to create these Maildir folders automatically when I create a new virtual user account (e.g., when I add a user and password combination to my Dovecot password file), or is it expected that I write a bash script to automate that task?

    2) I would welcome any comments you have on how this approach could be improved as I learn to set it up. My motivations are 1) to enable archiving/storing emails from several hosting providers that impose a cap on server storage, and 2) to give me somewhat greater control over email storage without requiring that I set up and administer a mail server from scratch (which I'm not yet prepared to do) (this follows the recommendations at https://ssd.eff.org/tech/email). Thank you!
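
    On question 1: Dovecot 2.x (the version shipped with 12.04) will normally create a missing Maildir itself on first login, provided mail_location is set and it has permission to write there, so the manual step may be unnecessary. If the bash route mentioned in the post is preferred anyway, a minimal sketch - the vmail path, user and group are assumptions, not from the original setup:

        #!/bin/bash
        # Usage: ./newmaildir.sh username
        # Creates a Maildir skeleton for a new virtual user.
        user="$1"
        base="/var/vmail/$user/Maildir"
        mkdir -p "$base"/{cur,new,tmp}
        chown -R vmail:vmail "/var/vmail/$user"
        chmod -R 700 "/var/vmail/$user"

    Pairing this with the line added to the Dovecot password file keeps account creation a single step.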


  • OS X Automator empty, blank or null value.

    - by Brian
    I have some data files, mostly Excel, Word and PDF; most of the files have no extension on them, i.e. they are missing the .doc or .xls. This data now needs to be used in a Windows environment. I have created Automator apps for each of the file types I want to add the extension onto. The problem is each app also adds the extension to files that already have one, so data.xls becomes data.xls.xls. I would like to figure out a way to only add the extension to the files without an extension. How do I tell the Finder filter that I only want it to return files without extensions? I see how to add a line to filter by extension, but I don't know how to let it know I want only blank/null values, i.e. files without any extension. Thanks.
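
    Automator's Finder filter has no built-in "extension is empty" test, though "Name does not contain ." gets close (risky if filenames themselves contain dots). From Terminal the same job is a few lines - a sketch to run against a copy of the data first; the exact MIME strings can vary with the libmagic version, so check a sample file:

        # Files whose names contain no dot, i.e. no extension
        find /path/to/data -type f ! -name '*.*' | while read -r f; do
            # Inspect the content rather than the name
            case "$(file --brief --mime-type "$f")" in
                application/msword)        mv "$f" "$f.doc" ;;
                application/vnd.ms-excel)  mv "$f" "$f.xls" ;;
                application/pdf)           mv "$f" "$f.pdf" ;;
            esac
        done

    Because file --mime-type looks at the content, only genuinely extension-less Office/PDF files get renamed, and already-named files like data.xls are never touched.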


  • Why do I have only incoming traffic when copying from network source to network destination?

    - by mxp
    I have a VirtualBox VM that is located on a network share of my NAS. When I copy something from the VM's disk onto another of the NAS's network shares (so the data is visible outside the VM), the Windows 7 Task Manager shows only incoming network traffic (yellow graph). It's as if the data was only received but never sent over the network. I verified that the data arrives on the other network share. As I understand it, the data flow looks like this:

        +-NAS--------+      +-Win7 PC (VM Host)-+
        | Share1(VM)-|>---->|-+                 |
        |            |      | |                 |
        | Share2 <---|<----<|-+                 |
        +------------+      +-------------------+

    If it was like this, I would see incoming and outgoing traffic, right? What am I not getting here?


  • GIT Website Deployment

    - by Brian
    I am attempting to set up Git to deploy my project to different locations based on the branch (I think this is what I want to do, anyway). My current setup is this: a local dev machine running NetBeans to make changes, and a remote server hosting the Git projects (the same server running Apache) with two subsites, test.FQDN.com and live.FQDN.com.

    What I would like to do is have one Git project (MyProject) and create a new feature branch. Any commits to the feature branch would be deployed to test.FQDN.com. Once the features have been tested and merged into the master branch, the merge would be deployed to live.FQDN.com. I have looked at Git's post-receive hooks and was able to use the git checkout -f command to deploy to the test.FQDN.com site; however, that only deploys the master branch, not the new feature branch. I do not have any funding to use a third party to make this work, and would prefer to stay within Git, but I have full root access to the web server if there is a package to install which would help. Any suggestions would be great!
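
    A post-receive hook can branch on the ref it receives, which covers exactly this split - a sketch, with the deployment paths and the feature-branch name as placeholders:

        #!/bin/sh
        # hooks/post-receive - stdin supplies: <old-sha> <new-sha> <refname>
        while read oldrev newrev ref; do
            case "$ref" in
                refs/heads/master)
                    GIT_WORK_TREE=/var/www/live.FQDN.com git checkout -f master
                    ;;
                refs/heads/feature)
                    GIT_WORK_TREE=/var/www/test.FQDN.com git checkout -f feature
                    ;;
            esac
        done

    The earlier attempt only deployed master because git checkout -f with no branch argument checks out whatever HEAD points at; naming the branch explicitly, and giving each branch its own GIT_WORK_TREE, is what routes each push to the right site.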

