Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • LAMP server VM issues

    - by nullArray
    After getting a recommendation to salvage a wiki by installing a LAMP server, I went on the prowl for a good virtualized one and settled on a VMware Player appliance. Since the Windows box has Bonjour, I can, for example, go to http://lamp.local. and see the web client. The problem is, I can't ssh in to scp the files I need, mount a USB thumbdrive (usbfs is unsupported), or get Samba working. I can't even update the Ubuntu installation; it fails. I've tried bridged, NAT, and host-only networking settings in VMware Player. Bridged gives me an undefined IP, while the other two each assign different IPs. All three settings let me reach the web config, but none of them give me Samba access. Windows usually freezes, then reports that it cannot connect. I'd rather not wipe a box to do a dedicated install, so is there a way I can get this VM working, or are there better LAMP VMs out there? This one came preconfigured for VMware Player, so I thought it would be perfect... Thanks,
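
    Since ssh, scp, and apt all fail while HTTP works, one first check worth running from the VM's console is whether an SSH daemon is installed and listening at all — many prebuilt LAMP appliances ship without one. A minimal sketch, assuming a stock Ubuntu guest (the 192.168.x.x address is a hypothetical host IP, for testing reachability without DNS):

        # inside the VM console
        sudo service ssh status      # is an SSH daemon installed and running at all?
        netstat -tln | grep :22      # is anything listening on port 22?
        cat /etc/resolv.conf         # if apt-get fails too, guest DNS is a likely culprit
        ping -c3 192.168.x.x         # hypothetical host IP: test raw reachability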

    Read the article

  • What's up with stat on Macos/Darwin? Or filesystems without names...

    - by Charles Stewart
    In response to a question I asked on SO, Give the mount point of a path, one respondent suggested using stat to get the device name associated with the volume of a given path. This works nicely on Linux, but gives crazy results on Mac OS 10.4. For my system, df and mount give:

        cas cas$ df
        Filesystem              512-blocks     Used     Avail Capacity  Mounted on
        /dev/disk0s3              58342896 49924456   7906440    86%    /
        devfs                          194      194         0   100%    /dev
        fdesc                            2        2         0   100%    /dev
        <volfs>                       1024     1024         0   100%    /.vol
        automount -nsl [166]             0        0         0   100%    /Network
        automount -fstab [170]           0        0         0   100%    /automount/Servers
        automount -static [170]          0        0         0   100%    /automount/static
        /dev/disk2s1             163577856 23225520 140352336    14%    /Volumes/Snapshot
        /dev/disk2s2             409404102  5745938 383187960     1%    /Volumes/Sparse

        cas cas$ mount
        /dev/disk0s3 on / (local, journaled)
        devfs on /dev (local)
        fdesc on /dev (union)
        <volfs> on /.vol
        automount -nsl [166] on /Network (automounted)
        automount -fstab [170] on /automount/Servers (automounted)
        automount -static [170] on /automount/static (automounted)
        /dev/disk2s1 on /Volumes/Snapshot (local, nodev, nosuid, journaled)
        /dev/disk2s2 on /Volumes/Sparse (asynchronous, local, nodev, nosuid)

    Trying to get the devices from the mount points, though:

        cas cas$ df | grep -e/ | awk '{print $NF}' | while read line; do echo $line $(stat -f"%Sdr" $line); done
        / disk0s3r
        /dev ???r
        /dev ???r
        /.vol ???r
        /Network ???r
        /automount/Servers ???r
        /automount/static ???r
        /Volumes/Snapshot disk2s1r
        /Volumes/Sparse disk2s2r

    Here, I'm feeding each of the mount points scraped from df to stat, outputting the results of the "%Sdr" format string, which is supposed to be the device name. Cf. the stat(1) man page:

        The special output specifier S may be used to indicate that the output,
        if applicable, should be in string format.  May be used in combination with:
        ...
        dr      Display actual device name.

    What's going on? Is it a bug in stat, or some Darwin VFS weirdness? Postscript: Per Andrew McGregor, try passing "%Sd" to stat for more weirdness. It lists some apparently arbitrary subset of files from the CWD...
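
    For comparison, a way to get the backing device that doesn't depend on stat's "%Sdr" specifier is to let df resolve the path itself — a minimal sketch (the path /Volumes/Snapshot is just an example from the listing above):

        # print the device (or pseudo-filesystem name) backing a given path
        df -P /Volumes/Snapshot | awk 'NR==2 {print $1}'    # -> /dev/disk2s1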

    Read the article

  • Optimum configuration of McAfee for Servers

    - by Wayne Arthurton
    Our corporate standard is McAfee Enterprise; unfortunately this is non-negotiable. On two types of servers I'm responsible for, SQL & Web, we have noticed major performance issues with the corporate standard setup:

        Max scan time: 45 sec
        One policy for all processes
        Scan ALL files on write, read, and open for backup
        Heuristics: find unknown programs, trojans, and macros
        Detect unwanted programs
        Exclude: EVT, LDF, LOG, MDF, VMD, Windows File Protection

    This of course still causes major slowdowns. IIS .NET recompiles are slow, especially with SharePoint, as are SQL backups and restores, SQL Analysis Services, Integration Services, and the temp data they generate. I have looked from time to time for some best practices on setting up McAfee for SQL, SQL Analysis Services, SQL Integration Services, Visual Studio, SharePoint, and .NET web servers in general. How do people set up McAfee Enterprise on their corporate servers, keeping security intact but affecting performance as little as possible? Has anyone run across white papers on these setups? Obviously some are case by case, but there must be some best practices out there somewhere.
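
    As one starting point, Microsoft's own antivirus guidance for SQL Server suggests excluding the database engine's data and backup file types and directories from on-access scanning. A sketch of the kind of exclusion list that guidance implies — paths are examples, not a vetted policy:

        Exclude extensions:  .mdf, .ldf, .ndf, .bak, .trn
        Exclude directories: the instance's Data and Backup folders,
                             e.g. C:\Program Files\Microsoft SQL Server\MSSQL\Data
        Lower-risk policy:   sqlservr.exe (scan on write only, not on every read)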

    Read the article

  • Deploying concrete5 on nginx

    - by Nithin
    I have a concrete5 site that works 'out of the box' on an Apache server. However, I am having a lot of trouble running it on nginx. The following is the nginx configuration I am using:

        server {
            root /home/test/public;
            index index.php;

            access_log /home/test/logs/access.log;
            error_log /home/test/logs/error.log;

            location / {
                # First attempt to serve request as file, then
                # as directory, then fall back to index.html
                try_files $uri $uri/ index.php;
                # Uncomment to enable naxsi on this location
                # include /etc/nginx/naxsi.rules
            }

            # pass the PHP scripts to FastCGI server listening on unix socket
            location ~ \.php($|/) {
                fastcgi_pass unix:/tmp/phpfpm.sock;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
                include fastcgi_params;
            }

            location ~ /\.ht {
                deny all;
            }
        }

    I am able to get the homepage but am having problems with the inner pages, which display an "Access denied". Possibly the rewrite is not working; in effect I think nginx is querying and trying to execute PHP files directly instead of going through the concrete5 dispatcher. I am totally lost here. Thank you for your help, in advance.
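
    Concrete5 routes every non-file request through its index.php dispatcher, and nginx's try_files needs a leading slash on that fallback to make an internal redirect. A minimal sketch of the location block this usually comes down to — untested here, assuming pretty URLs are enabled in concrete5:

        location / {
            # fall back to the concrete5 dispatcher, preserving the query string
            try_files $uri $uri/ /index.php?$args;
        }

    Without the leading slash, "index.php" is not treated as a URI fallback, so requests for inner pages never reach the dispatcher.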

    Read the article

  • Thin client - cloud machine - to run via iPad, iPhone, most Androids etc

    - by Carl Lindberg
    I'm tired of having a MacBook that breaks down, and of constantly syncing the files I need via Dropbox and the like across machines with different OS installations. It sucks. I want a thin client where I can log in on any machine - my iPhone, PC desktop, iPad, etc. - to one running machine. I would like to replace a modernly powerful desktop iMac with a thin client running via my iPad. I will connect the iPad with a keyboard/mouse too, so you get the idea. But I want to be able to use some Android phones as well (I guess most Android phones today have good enough performance/resolution etc. to run a thin client). Of course it has to handle sound input/output. Printing can be solved by PDF/emailing etc., so no direct communication with printer ports or USB is necessary. Is there such a service today? It should cost somewhere under something like $40/month. I will run stuff like the CPU-heavy Ableton for music production, Xcode for making iOS apps, some games, etc., and on the thin client also run virtual machines: VMs of Ubuntu and Windows.

    Read the article

  • Server 2008 NAT Internet Not Working

    - by Jack
    I'm trying to set up Routing and Remote Access on Windows Server 2008 R2. I have a network connection from which I want to share the internet to another private network. The server has two NICs, configured as follows:

        External NIC (dynamically assigned by ISP)
        IP:      10.175.4.150
        Subnet:  255.255.192.0
        Gateway: 10.175.0.1
        DNS:     10.175.0.1

        Internal NIC
        IP:      172.16.254.1
        Subnet:  255.255.255.0
        Gateway: none
        DNS:     none

    I have set the external NIC to be the public interface and enabled NAT on it in the RRAS MMC, and set the internal NIC to be a private interface. I have also set up DNS forwarding in the NAT section. From a client (IP: 172.16.254.2) I can ping the server and access files on it, but when I try to browse the web with the default gateway set to the internal NIC's IP, I end up getting a 404 page returned from the ISP's default gateway. I'm guessing it's something to do with the double NAT, possibly. Trying to ping the ISP's default gateway from a private-network client just times out, as does accessing it directly. I've disabled and reconfigured RRAS multiple times and that doesn't seem to have made a difference, so can anyone tell me what I'm doing wrong? Thanks.
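
    To narrow down whether this is a NAT problem or a DNS problem, a few checks from the 172.16.254.2 client are worth capturing before reconfiguring RRAS again. These are stock Windows tools; the 8.8.8.8 address is just a convenient external IP to test with:

        rem confirm the gateway (172.16.254.1) and a DNS server are actually set
        ipconfig /all
        rem does raw routed traffic make it out at all?
        ping 8.8.8.8
        rem does the RRAS DNS proxy actually resolve names?
        nslookup www.google.com 172.16.254.1
        rem see where packets die - at RRAS or at the ISP gateway
        tracert 8.8.8.8

    If ping-by-IP works but name resolution fails, the 404 from the ISP gateway is likely its captive "could not resolve" page rather than a routing fault.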

    Read the article

  • Custom flash uploader breaks only on Media Temple

    - by LaserWolf
    I've built a Flash uploader that uploads files up to 100MB to a PHP backend. It works wonderfully on our dev server, on a HostGator VPS, and on one of our clients' servers running Debian. It will not work on our Media Temple (dv) 3.5, and I don't know why. The upload starts but chokes after a few seconds with this Flash error message:

        ioerror: [IOErrorEvent type="ioError" bubbles=false cancelable=false eventPhase=2
        text="Error #2038: File I/O Error. URL: http://..._upload.php"]

    The problem seems to be specific to the asynchronous nature of the Flash uploader, because a straight PHP upload works fine, and the php.ini settings are set to allow such a large upload as well. Also, I've thoroughly googled Flash, 2038, I/O error, etc., but have yet to find anything that helped. Here's the weird part, though: we work in Seattle, and it won't work from the office or from home. But while on the phone with MT's support, they were able to upload a file through our Flash uploader just fine. I'm not sure where their office was located, but I think it was Atlanta. So the problem also seems specific to physical location. Has anyone run into this sort of problem before?
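
    Even with upload sizes reportedly allowed, several independent php.ini settings can cut an asynchronous upload off mid-stream, so they're worth confirming together via phpinfo(). A sketch of the values involved — the numbers are examples sized for a 100MB upload, not Media Temple's defaults:

        upload_max_filesize = 100M
        post_max_size = 110M        ; must exceed upload_max_filesize
        max_execution_time = 300    ; slow client uploads need wall-clock time
        max_input_time = 300
        memory_limit = 128M

    Another (dv)-specific suspect worth ruling out is mod_security, which on some hosts rejects large multipart POSTs coming from Flash's uploader; that would also square with the failure depending on where the request originates.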

    Read the article

  • Uninstall IIS on Windows 7

    - by CJM
    I've just rebuilt my development machine and installed IIS. I then installed the Web Deployment Tool and used it to restore my previously backed-up websites to the clean machine. Unfortunately the restoration didn't work correctly or fully. I couldn't easily correct the problem, so I decided to uninstall/reinstall IIS and recreate the sites manually. I uninstalled IIS and rebooted, but there was still plenty of stuff left around, such as various files in /windows/system32/inetsrv/, which I tried to delete manually (with limited success!). I rebooted again and tried to reinstall IIS - it reported an error (no meaningful message) and requested another reboot. The event log includes the following errors:

        The World Wide Web Publishing Service (WWW Service) did not register the URL prefix
        http://*:80/gallery for site 1. The site has been disabled.

        Unable to bind to the underlying transport for [::]:80. The IP Listen-Only list may
        contain a reference to an interface which may not exist on this machine.

    I'd like to avoid another rebuild. Can I completely remove IIS, such that I can reinstall it from scratch? Or can I 'fix' the current setup so that IIS will reinstall over what is already there?
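
    The second error points at the HTTP.sys listener list, which lives outside IIS itself and survives an IIS uninstall. It can be inspected and pruned with netsh — the show command is safe to run as-is, and the delete is a sketch to adapt to whatever stale address actually appears:

        rem list the addresses HTTP.sys is restricted to listening on
        netsh http show iplisten
        rem if a stale address is listed, remove it (example address shown)
        netsh http delete iplisten ipaddress=::

    An empty iplisten list restores the default of listening on all interfaces, which often clears the "unable to bind" failure on reinstall.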

    Read the article

  • PostgreSQL 8.4 - Tablespace Optimization

    - by FloE
    I'm currently running a PostgreSQL database with about 1.5 billion rows / 500 GB of data (including indices). There are several schemata: one for the (read-only, irregularly changed/updated) 'core model' and one for every user (about 20 people). The users can access the core data and store data in their own schema, so everything is located in one database. The server runs CentOS and PostgreSQL 8.4, is used for scientific studies, exploration, etc., and is running quite well. These days an upgrade of the DB storage hard disks arrives - all with the same performance as the old ones. I'm looking for the best way to distribute the data across these disks. It would be possible to separate frequently used objects (the core data) from the user schemata, but I'm not sure if this is really worth the effort. It seems to be a much better idea to move the WAL files (the pg_xlog directory) to their own partition: http://www.postgresql.org/docs/8.4/static/wal-internals.html What are your opinions? Are there any tablespace- or partitioning-related performance documentations / benchmarks?
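
    For reference, the two moves under consideration look roughly like this on 8.4 — a sketch, with paths and the table name invented for illustration, and noting that the WAL relocation requires the cluster to be stopped:

        -- put heavily-used core-model objects on their own disk
        CREATE TABLESPACE core_ts LOCATION '/mnt/disk2/pg_core';
        ALTER TABLE core_model.big_table SET TABLESPACE core_ts;

    and for the WAL:

        # with postgres stopped:
        mv $PGDATA/pg_xlog /mnt/disk3/pg_xlog
        ln -s /mnt/disk3/pg_xlog $PGDATA/pg_xlog

    The symlink approach for pg_xlog is the one the 8.4 documentation itself describes, and it tends to pay off precisely because WAL writes are sequential and fsync-heavy, so separating them from random data I/O helps both.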

    Read the article

  • How long will a USB key with an OS installed on it last?

    - by Xananax
    I've heard numerous times that installing an OS on a USB key is a bad thing to do, as USB keys typically support a certain number of writes before dying, and installing an OS on one will wear it out (unless it's used sporadically for rescue purposes). Nonetheless, I am very tempted to install some flavour of Linux (Ubuntu or Arch, I haven't decided yet) on a small, transportable USB key. My problem is, although you read a lot that it's "bad", you are never told how bad. How long would it last (provided, say, a PC that is on 24/7)? A month? A year? Five years? Are there recipes to make it last longer? Is there any reason besides wear that should prevent me from attempting this? I mean, if it can be calculated, then I could theoretically shield myself by doing regular backups onto another key when the deadline gets close (for example). Notes: I am not talking about using a USB key as a live CD, but actually installing the OS on it. When I say "USB key", I refer to the little keys with flash memory, not an external USB hard drive. For the curious, my reason is that I work in a lot of different places, on different PCs, and I have a very customized session, with my own WM, my own key bindings, my own scripts, a selection of plugins for Firefox and Chrome, etc., and currently I am synchronizing all this through a mix of Dropbox, git, and transporting files on USB keys, and it's becoming a chore. It would be much simpler for me to just plug in the USB key, mount the hard disk of the PC I am using, and use its processing power without actually needing to install any OS on it.
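
    On the "recipes to make it last longer" point: most of the write traffic on an idle Linux install is access-time metadata and log churn, which can be redirected away from the flash. A sketch of the usual /etc/fstab tweaks — device names are illustrative:

        # mount the root filesystem without access-time writes
        /dev/sdb1  /         ext2   noatime,errors=remount-ro  0  1
        # keep high-churn directories in RAM instead of on flash
        tmpfs      /tmp      tmpfs  defaults,noatime           0  0
        tmpfs      /var/log  tmpfs  defaults,noatime           0  0

    Choosing a non-journaling filesystem like ext2 over ext3/ext4 also removes a steady stream of journal writes, at the cost of slower recovery after an unclean shutdown.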

    Read the article

  • How to re-join an AD2003 domain with Samba after deleting the machine account?

    - by Guss
    During some troubleshooting I deleted the machine account for a Linux server running Samba from our AD 2003 domain. We are using Kerberos for authentication, and after I deleted the machine account I tried to join the domain again using

        net ads join -U Administrator

    But I keep getting Kerberos errors like these:

        [2009/08/18 16:14:36, 0] libads/kerberos.c:ads_kinit_password(228)
          kerberos_kinit_password [email protected] failed: Client not found in Kerberos database
        Failed to join domain: Improperly formed account name

    It appears as if Samba remembers that it once had an account with the AD and keeps trying to reconnect to it, but I want to create a new account from scratch. I tried to delete all the .tdb files I could find, as well as everything under /var/cache/samba, but to no avail - it still behaves the same. I also tried to create the machine account on the AD side, but then I get a similar error when I try to join, about failure to authenticate with the machine account - it looks like Samba tries the previous machine account password, and I don't know how to reset it, or even if I could figure out what Samba uses, how to set it in the AD. Any help would be greatly appreciated, as at this point the only thing I can think of is to reformat and reinstall the machine, and I would really REALLY love to not do that. Thanks in advance.
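
    One sequence that often clears stale machine-account state before a fresh join — offered as a sketch, since the secrets.tdb path differs by distro (commonly /var/lib/samba or /etc/samba, and it is where Samba caches the old machine password):

        # throw away cached tickets and the old machine password
        kdestroy
        rm /var/lib/samba/secrets.tdb      # or /etc/samba/secrets.tdb
        service smb stop                   # stop samba while rejoining

        # get a fresh admin ticket, then join, creating a new computer account
        kinit Administrator
        net ads join -U Administrator

    The "Improperly formed account name" message also sometimes traces back to the netbios name / realm settings in smb.conf not matching the AD realm case-for-case, so that is worth double-checking before the join.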

    Read the article

  • Increasing Java's heapspace in Tomcat startup script

    - by Ankur
    I want to increase my heap size when using Tomcat. I was told to add the line

        export CATALINA_OPTS=-Xms16m -Xmx256m;

    to the startup.sh script. I did so (at the beginning) but got the error

        export: 24: -Xmx256m: bad variable name

    Where am I supposed to add it? Am I doing something else wrong?

        export CATALINA_OPTS=-Xms16m -Xmx256m;

        # Better OS/400 detection: see Bugzilla 31132
        os400=false
        darwin=false
        case "`uname`" in
        CYGWIN*) cygwin=true;;
        OS400*) os400=true;;
        Darwin*) darwin=true;;
        esac

        # resolve links - $0 may be a softlink
        PRG="$0"

        while [ -h "$PRG" ] ; do
          ls=`ls -ld "$PRG"`
          link=`expr "$ls" : '.*-> \(.*\)$'`
          if expr "$link" : '/.*' > /dev/null; then
            PRG="$link"
          else
            PRG=`dirname "$PRG"`/"$link"
          fi
        done

        PRGDIR=`dirname "$PRG"`
        EXECUTABLE=catalina.sh

        # Check that target executable exists
        if $os400; then
          # -x will Only work on the os400 if the files are:
          # 1. owned by the user
          # 2. owned by the PRIMARY group of the user
          # this will not work if the user belongs in secondary groups
          eval
        else
          if [ ! -x "$PRGDIR"/"$EXECUTABLE" ]; then
            echo "Cannot find $PRGDIR/$EXECUTABLE"
            echo "This file is needed to run this program"
            exit 1
          fi
        fi

        exec "$PRGDIR"/"$EXECUTABLE" start "$@"
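
    The "bad variable name" comes from the missing quotes: without them, the shell parses -Xmx256m as a second (invalid) name to export rather than as part of the value. A sketch of the corrected line — and since catalina.sh sources CATALINA_BASE/bin/setenv.sh if present (on Tomcat 6 and later), that file is a safer home for it than an edited startup.sh:

        # quote the whole value so both flags end up in CATALINA_OPTS
        export CATALINA_OPTS="-Xms16m -Xmx256m"

    Keeping it in setenv.sh also means a Tomcat upgrade that replaces startup.sh won't silently drop the setting.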

    Read the article

  • Cannot perform a PECL installation

    - by Petrusa
    I have been trying to do a few PECL installations, but all of them return the same type of error - something related to timezones? I'm running RedHat x86_64 ES5. Attempting to install geoip-1.0.7:

        root@server [~]# pecl install geoip-1.0.7
        downloading geoip-1.0.7.tgz ...
        Starting to download geoip-1.0.7.tgz (9,416 bytes)
        .....done: 9,416 bytes

        Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are
        *required* to use the date.timezone setting or the date_default_timezone_set() function.
        In case you used any of those methods and you are still getting this warning, you most
        likely misspelled the timezone identifier. We selected 'America/Chicago' for
        'CST/-6.0/no DST' instead in PEAR/Validate.php on line 489

        Warning: strtotime(): It is not safe to rely on the system's timezone settings. You are
        *required* to use the date.timezone setting or the date_default_timezone_set() function.
        In case you used any of those methods and you are still getting this warning, you most
        likely misspelled the timezone identifier. We selected 'America/Chicago' for
        'CST/-6.0/no DST' instead in /usr/local/lib/php/PEAR/Validate.php on line 489

        3 source files, building
        running: phpize
        Configuring for:
        PHP Api Version:         20090626
        Zend Module Api No:      20090626
        Zend Extension Api No:   220090626
        building in /var/tmp/pear-build-root/geoip-1.0.7
        running: /root/tmp/pear/geoip/configure
        checking for egrep... grep -E
        checking for a sed that does not truncate output... /bin/sed
        checking for cc... cc
        checking for C compiler default output file name... a.out
        checking whether the C compiler works... configure: error: cannot run C compiled programs.
        If you meant to cross compile, use `--host'.
        See `config.log' for more details.
        ERROR: `/root/tmp/pear/geoip/configure' failed

    What is going on? Anyone that could assist, please...
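
    The timezone warnings are cosmetic; the fatal part is "cannot run C compiled programs", which on cPanel-style RHEL boxes is very often a build directory mounted noexec rather than a compiler problem. Two hedged checks, assuming /var/tmp is the build area as shown above:

        # is the build filesystem mounted noexec?
        mount | grep -E '/(var/)?tmp'

        # if so, point PEAR/PECL at an exec-mounted directory instead
        mkdir -p /root/pecl-build
        pecl config-set temp_dir /root/pecl-build

    The strtotime() warnings themselves go away by setting date.timezone in the CLI php.ini, e.g. date.timezone = America/Chicago.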

    Read the article

  • VMWare Newbie - looking for hardware recommendations and help :) [closed]

    - by Dan
    I am looking for some hardware recommendations for an upcoming virtualization project. We are a small company (80 users - 25 in site 1, 55 in site 2) currently using Windows Server 2003 - no VM servers yet. Our AD is set up so that site 1 is the root domain and site 2 is a subdomain/subnet, connected by T1 and VPN for failover. The current DCs also serve as file servers, print servers, and antivirus servers. Email is in the cloud. Additionally, in site 1 we have 3 more member servers: one running IBM WebSphere for a customer-specific app, one running Infor PowerLink (no real heavy load), and another that we use for Visual Studio apps and that also runs DirSync for Exchange Online. No heavy workloads on any of these machines, really. We also have an AS400 box that we run ERP/CRM software on, which site 2 connects to over the WAN link. In site 2 we also have a SQL machine running on Win2K Server. Database files are not large - less than 5 GB. Light to medium workload on this machine. File servers in each site store less than 500 GB of data and probably won't grow to more than 1 TB in the next 5 years. I am looking to go to VMware in both sites and virtualize all servers. What recommendations do you have for server and storage hardware? Is it safe to virtualize all of your DCs? Any help or advice would be greatly appreciated. Thanks.

    Read the article

  • Tool to modify properties/metadata of a PDF? i.e. Change "Title", "Author"? Sony Reader showing some PDFs as untitled

    - by Chris W. Rea
    I own a Sony Reader PRS-600 ebook reader. I bought a ton of Manning Publications ebooks (DRM-free) recently. Many of the books are PDFs since not all the ones I wanted are available in epub format. The problem: Some of the PDF books I purchased have incorrect or missing metadata. Making things worse, the Sony Reader only displays the "Title" from the PDF metadata when displaying book titles in the reader's collection of books! The Reader doesn't display the filename. So, even though I have a PDF informatively named "Windows PowerShell In Action.pdf", it shows up as "untitled" in the Reader. Imagine how useful the Reader's list of book titles becomes when many are just "untitled" or "unnamed document" ! Yes, it is maddening. So – short of expecting the publisher to fix the files or Sony to add a filename-based list instead, I'm looking for a way to fix the PDF metadata. I can view the metadata with Adobe Reader, but it doesn't permit modification of the properties. Leading to: Question: Is there a tool – free, or cheap – and either for PC or Mac, that can modify the properties / metadata of a DRM-free PDF document? I want to correct "Title" and "Author" fields, specifically.
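
    One concrete option: the free, cross-platform exiftool can rewrite a PDF's document-information fields in place. A sketch using the filename from the question (the author value is filled in only as an example):

        exiftool -Title="Windows PowerShell in Action" -Author="Bruce Payette" "Windows PowerShell In Action.pdf"

    By default exiftool keeps the unmodified file alongside as "Windows PowerShell In Action.pdf_original", so the change is easy to back out if the Reader dislikes the result.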

    Read the article

  • apache 2.4, mod_proxy_fcgi not honouring .htaccess, work around needed

    - by user229874
    I am using Apache 2.4.7 with mod_proxy_fcgi to hand PHP off to php-fpm (this will be used for a shared hosting environment). The .htaccess works fine for non-PHP files, but once a request hits the rewrite rule that proxies PHP requests through, the .htaccess is ignored. I know why it is happening. The question is: how do I work around it? How do I force Apache to treat a request for a PHP file as a request for a local file, and then proxy it through? I have spent substantial time researching this problem, and the following "answers" were given as solutions: 1) "Use Apache configuration instead of .htaccess." A valid solution, but not for a shared hosting environment (I am not going to give shared hosting customers access to the Apache configuration ;)). 2) "Don't use .htaccess, as it has performance/security/other issues." Well, how else would shared hosting customers control access and URL rewriting on their sites? Besides, if .htaccess were not a requirement I would simply use nginx. 3) "Put the rewrite rule for the proxy inside a container section." This is incorrect, and it does not work. This behaviour appears to be not a bug but a "feature", as per https://issues.apache.org/bugzilla/show_bug.cgi?id=54887
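
    Since Apache 2.4.10, there is a cleaner alternative to RewriteRule for this: a SetHandler-based handoff, which keeps the request in Apache's normal file-mapping path (so per-directory .htaccess processing still applies) before proxying. A sketch, assuming a TCP-listening php-fpm on 127.0.0.1:9000 — note it would mean moving past the 2.4.7 in use here:

        <FilesMatch "\.php$">
            # requests still map to the local file first, then hand off to FPM
            SetHandler "proxy:fcgi://127.0.0.1:9000"
        </FilesMatch>

    With a unix socket, the target becomes proxy:unix:/path/to/php-fpm.sock|fcgi://localhost (also 2.4.10+).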

    Read the article

  • Adobe Reader Wants Sensitive Email Details

    - by KDM
    When I run Adobe Reader, it tells me:

        Either there is no default mail client or the current mail client cannot fulfill
        the messaging request. Please run Microsoft Outlook and set it as the default
        mail client.

    I have a couple of issues with this: 1) It presupposes everyone has Microsoft Office installed. Not all home users have the budget or inclination for this. 2) It presupposes everyone wants Microsoft Outlook to be their default mail client. 3) I have Microsoft Office (incl. Outlook) installed and set as my default mail client. Even making it the default mail client from within the Adobe Reader preferences doesn't stop the dialog appearing. 4) I thought I'd give Adobe Reader a new email address in the preferences, just to get it to stop bugging me. I notice, though, that it wants the SMTP and POP addresses and the account password? They have got to be kidding. I just want to view PDF files. How do I get the message to go away without telling Adobe my life story, giving them my mother's maiden name, my favourite movie, my place of birth, the name of my first goldfish, and emptying the contents of my wallet for them?

    Read the article

  • How do I properly configure a ZipInstaller .zic file?

    - by Iszi Rory or Isznti
    As of version 1.20, ZipInstaller is supposed to support the use of a configuration file to customize its installation options. Generally, all the options I want to use are available through the dialog, so I really haven't bothered with the configuration file until now. The problem now is that certain tools, such as PsTools from Sysinternals, do not properly show their Product Name to ZipInstaller. ZipInstaller's dialog will let you customize the Start Menu folder and Program Files folder, but that still doesn't change the Product Name that it sees for the software. So, instead of having "PsTools" in my Add/Remove Programs, I get "Sysinternals Software". For some things, the situation is even more confusing. For example, the NIST SP 800-53 Reference Database Application gets installed as "FileMaker Pro Runtime". To rectify this, I've tried to use the aforementioned .zic configuration file. As I understand it, it's a basic INI file you create and put in the root of the ZIP file. ZipInstaller is supposed to read that file and adjust its parameters accordingly. Mine looks like this:

        [install]
        ProductName=NIST_SP_800-53
        ProductVersion=1.4.1
        CompanyName=NIST
        Description=NIST_SP_800-53
        InstallFolder=%zi.ProgramFiles%\%zi.ProductName%
        StartMenuFolder=%zi.CompanyName%\%zi.ProductName%

    I've named it ~zipinst~.zic and placed it in the root of the ZIP file, but when I run ZipInstaller it doesn't seem to recognize any of the information I've given it in the .zic file. What might I be doing wrong here?

    Read the article

  • How is Apache still working?

    - by PJ
    Recently, I decided to set up a local development environment for my work projects. I'm a PHP developer, with just enough knowledge of Linux and Apache to break things mightily. To get the local environment looking like my work environment, I had to upgrade PHP. When I did, Apache wouldn't restart. I decided I wanted to start fresh (this is where things went wrong) and that I'd reinstall Apache and PHP using MacPorts. So, I went through and tried to delete all the Apache files. Yup. I ran locate apache2 and deleted any folders that looked important. (I know, I know.) Then I ran /usr/libexec/locate.updatedb to make sure everything was up to date. I even restarted my machine, just to make sure. The issue is, http://localhost still works, as does an alias I set up, http://butler. Shouldn't they not work? Now that I'm this far in, are there any tips for how to completely remove Apache so I can start over? Worst case, I have a Time Machine backup, so I can always just restore that... Thanks in advance.
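
    On OS X, the usual reason a "deleted" Apache keeps answering is that the running process (or the system's own copy managed by launchd) is still resident; deleting binaries doesn't stop a daemon that is already in memory. A few checks worth running before reinstalling via MacPorts — standard OS X commands, though the launchd plist path varies by OS version:

        # is an httpd still running, and from which binary?
        ps aux | grep -i httpd

        # is the system's built-in Apache registered with launchd?
        sudo launchctl list | grep -i apache

        # stop the built-in "Web Sharing" instance if that's the one answering
        sudo launchctl unload -w /System/Library/LaunchDaemons/org.apache.httpd.plist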

    Read the article

  • Postfix 554 <[email protected]>: Relay access denied

    - by Matt
    So I am trying to set Postfix up and I am running into some problems. Here are my files:

        vim /etc/postfix/main.cf

        relayhost = [smtp.gmail.com]:587
        smtp_connection_cache_destinations = smtp.gmail.com
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_tls_security_options = noanonymous
        tls_random_source = dev:/dev/urandom
        smtp_tls_CAfile = /etc/pki/CA/cacert.pem
        smtp_tls_security_level = may
        smtp_tls_scert_verifydepth = 9
        append_dot_mydomain = no
        readme_directory = no
        myhostname = maggie.deliverypath.com
        alias_maps = hash:/etc/aliases
        alias_database = hash:/etc/aliases
        myorigin = /etc/mailname
        mydestination = maggie.deliverypath.com, localhost.deliverypath.com, , localhost
        mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
        mailbox_size_limit = 0
        recipient_delimiter = +
        inet_interfaces = all

    I also have the Gmail password info:

        vim /etc/postfix/sasl_passwd

        gmail-smtp.l.google.com [email protected]:somepass
        smtp.gmail.com          [email protected]:somepass

    Then I try to follow this article and I get this output:

        telnet mail.demoslice.com 25
        Trying 67.207.128.80...
        Connected to www.slicehost.com.
        Escape character is '^]'.
        220 www.slicehost.com ESMTP Postfix (Ubuntu)
        HELO test.demoslice.com
        250 www.slicehost.com
        MAIL FROM:<[email protected]>
        250 Ok
        RCPT TO:<[email protected]>
        554 <[email protected]>: Relay access denied

    Postfix is started:

        service postfix start
        * Starting Postfix Mail Transport Agent postfix ...done.

    Then the screen gets frozen and I can't do anything. Any ideas?
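
    The 554 itself is expected behaviour from this config: the telnet session comes in from outside mynetworks, is not SASL-authenticated, and asks for a gmail.com recipient that is not in mydestination, so Postfix correctly refuses to relay. A sketch of the restriction stanza that governs the decision — shown with Postfix's usual defaults, not as a recommended change:

        smtpd_recipient_restrictions =
            permit_mynetworks,
            permit_sasl_authenticated,
            reject_unauth_destination

    To have a message relayed out through the Gmail relayhost, the submitting client has to match one of the first two permits - for example, by submitting from 127.0.0.1 (inside mynetworks) rather than telnetting in from outside.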

    Read the article

  • USB Flash Drive Problem!

    - by Daren
    Here it goes: my friend saved his work onto my flash drive, which was detectable and openable at the time, but the very next day the drive wouldn't show up in My Computer, and Windows gave him error code 43 (unknown device). I've been searching for hours to find a solution, as the work on it is important to him. I know he should have backed up his files, but let's not go there. I tried a few other systems that had once detected this flash drive, but the problem persisted. I don't know whether or not the flash drive is damaged, but when plugging and unplugging it, sounds still come out of the system. Tried solutions:

        [Vista Home Premium / his computer] Uninstalled -- restarted -- re-installed (ERROR 43)
        [Windows 7 / my computer]           Uninstalled -- restarted -- can't install (ERROR 43)

    It seems that my computer (Windows 7) already had the latest drivers, but it still can't detect the drive. It's a Kingston DataTraveler 101 (DT101) 8GB. Could unplugging the flash drive without clicking "Safely Remove Hardware" be the problem? Kindly provide some help. Thank you all.

    Read the article

  • Windows 7 backup keeps trying to backup non-existent file and folders

    - by Ayusman
    My Windows 7 system backup keeps trying to back up two non-existent files and one folder. I have double-checked that these files do not exist. How does Windows 7 try to back up something that does not exist, and then complain and fail the backup? Here are the messages:

        Backup encountered a problem while backing up file D:\Non library songs\Klub Arabia.
        Error: (The system cannot find the file specified. (0x80070002))

        Backup encountered a problem while backing up file D:\Non library songs\Klub Arabia 2.
        Error: (The system cannot find the file specified. (0x80070002))

        Backup encountered a problem while backing up file D:\Data\FRIENDS.
        Error: (The system cannot find the file specified. (0x80070002))

    These files/folders may have been there at some point but have since been moved. Any idea how to solve this? Does this mean all other content has been backed up successfully? I have Windows 7 Professional 64-bit and I am backing up my Win7 machine to an external hard drive. Ayusman

    Read the article

  • ssh over a tunnel in order to configure auto login

    - by Vihaan Verma
    I'm trying to copy the id_rsa.pub key to the server. The server in my case also hosts a virtual machine called dev, which runs on the host machine. I copied the id_rsa.pub key to the host for auto-login using this command:

        ssh-copy-id -i ~/.ssh/id_rsa.pub vickey@host

    which worked fine, and I can auto-login to the host. I also wanted to auto-login to the dev machine. I know I can just copy the contents of authorized_keys from the host machine to the dev machine, but I'm looking for a command-line way of doing things. Creating a tunnel seemed like the solution:

        ssh vickey@host -L 2000:dev:22 -N

    Now when I tried

        ssh-copy-id -i ~/.ssh/id_rsa.pub vickey@localhost -P 2000

    the password that worked here was my local machine's; I expected it to ask me for the password of my dev machine. The above command adds the pub key to the local machine and not to the dev machine. However, this command asks me for the dev password and copies the files:

        scp -P 2000 vickey@localhost:/home/vickey/trash/vim .
        vickey@localhost's password:
        vim                                    100%  111     0.1KB/s   00:00

    How do I do the same with ssh-copy-id?
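
    A likely wrinkle: ssh takes the port as lowercase -p (capital -P belongs to scp), so the -P 2000 above is silently not doing what was intended and ssh-copy-id connects to port 22 on localhost - that is, the local machine, which matches the password behaviour observed. A sketch of the corrected call through the same tunnel (on older ssh-copy-id versions, options are safest placed before the host):

        ssh-copy-id -i ~/.ssh/id_rsa.pub -p 2000 vickey@localhost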

    Read the article

  • HTB.init / tc behind NAT

    - by Ben K.
    I have an Ubuntu 10 box that I'm trying to set up as a bandwidth-shaping router. The machine has one WAN interface, eth0, and two LAN interfaces, eth1 and eth2. NAT is configured using MASQUERADE as described at InternetConnectionSharing. I'm mostly concerned with shaping outbound traffic from the LAN interfaces - in the end, I'd like a hard 768Kbps limit per LAN interface (rather than a limit on eth0 pooled across all interfaces). I installed HTB.init and, riffing on the examples, tried to set this up on eth1 by putting three files into /etc/sysconfig/htb:

        /etc/sysconfig/htb/eth1
            DEFAULT=30
            R2Q=100

        /etc/sysconfig/htb/eth1-2.root
            RATE=768Kbps
            BURST=15k

        /etc/sysconfig/htb/eth1-2:30.dfl
            RATE=768Kbps
            CEIL=788Kbps
            BURST=15k
            LEAF=sfq

    I can /etc/init.d/htb start and /etc/init.d/htb stats and see information that /seems/ to suggest it's working... but when I try pulling a large file via the WAN interface, the shaping clearly isn't in effect. Any suggestions? My guess is it has something to do with where the shaping falls in the NAT chain, but I really have no idea where to begin troubleshooting this.

    Update: Here's my /etc/init.d/htb list output; it seems to make sense - the default rate for eth1 is 768Kbps?

        ### eth0: queueing disciplines
        qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
        qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec

        ### eth0: traffic classes
        class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
        class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b

        ### eth0: filtering rules
        filter parent 1: protocol ip pref 100 u32
        filter parent 1: protocol ip pref 100 u32 fh 800: ht divisor 1
        filter parent 1: protocol ip pref 100 u32 fh 800::800 order 2048 key ht 800 bkt 0 flowid 1:30
          match 00000000/00000000 at 12
          match 00000000/00000000 at 16

        ### eth1: queueing disciplines
        qdisc htb 1: root refcnt 2 r2q 100 default 30 direct_packets_stat 0
        qdisc sfq 30: parent 1:30 limit 127p quantum 1514b perturb 10sec

        ### eth1: traffic classes
        class htb 1:2 root rate 768000bit ceil 768000bit burst 1599b cburst 1599b
        class htb 1:30 parent 1:2 leaf 30: prio 0 rate 6144Kbit ceil 6144Kbit burst 15Kb cburst 1598b
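
    One detail worth noticing in that listing: in tc's unit syntax, "Kbps" means kilobytes per second while "kbit" means kilobits per second, which is why the leaf class shows rate 6144Kbit (768 × 8) for a configured RATE of 768Kbps. If the intent was 768 kilobits, an equivalent manual tc setup would look like this sketch, with interface and class numbering copied from the listing above:

        tc qdisc add dev eth1 root handle 1: htb default 30 r2q 100
        tc class add dev eth1 parent 1:  classid 1:2  htb rate 768kbit ceil 768kbit
        tc class add dev eth1 parent 1:2 classid 1:30 htb rate 768kbit ceil 788kbit burst 15k
        tc qdisc add dev eth1 parent 1:30 handle 30: sfq perturb 10

    Also note that HTB shapes egress, so rules attached to eth1 limit traffic flowing toward the LAN; shaping traffic from the LAN out to the internet means attaching classes to eth0.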

    Read the article

  • Automounting Active Directory home drives on a Linux server on login

    - by Ethan
    I've got a CentOS 5.7 box authenticating against Active Directory through PBIS Open (the new LikeWise Open), which works well. Now I'm trying to get the server to automount the user's AD home directory, located at //ad.server.dom/shares/home directories (yeah, there's a space in the path; I didn't set this up). Each user has a directory in there with the same name as the user. I've tried to get pam_mount working, but it has a series of issues on RedHat and friends, and I can't seem to get it going. The directory does need to be automounted for the server to perform its role. My reading on automount seems to suggest there's no way to get it to do its thing with authentication, though I'm happy to be proved wrong. I've looked at this resource, but it requires RedHat (and thus CentOS) 6 or higher, and newer packages than I have. I can manually (as root) mount the AD directory using the command

        mount.cifs "//ad.server.dom/Shares/home directories/testuser" /home/local/AD/testuser/nfs_mount/ -o username=testuser

    and when I log in as testuser, I can see all of the sample files in the nfs_share directory. Any tips towards the right direction would be highly appreciated. This is going to be on a server at a college, so it needs to be fairly stable, and would lead towards more Linux adoption there.
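
    For reference, autofs can do this kind of per-user CIFS mount with a wildcard map; the catch is credentials, which have to come from a file rather than the login session (so it sidesteps rather than solves the per-user authentication wish). A sketch with invented paths and a shared service account - and note the space in the share path may need escaping, which autofs maps handle poorly:

        # /etc/auto.master
        /home/local/AD  /etc/auto.ad  --timeout=60

        # /etc/auto.ad  ('&' expands to the looked-up key, i.e. the username)
        *  -fstype=cifs,credentials=/etc/ad.creds  ://ad.server.dom/Shares/home\ directories/&

    True per-user Kerberos-authenticated mounts generally need the sec=krb5/cifs.upcall machinery, which is much better supported on EL6 than EL5 - consistent with what that resource says.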

    Read the article
