Search Results

  • iSCSI, failover and XenServer

    - by jemmille
    I have an iSCSI failover implementation set up, so if one of my storage units fails the other takes over immediately (it also runs the NFS shares). When failover occurs, volumes are exported, the IP is switched to the other machine, and the targets are reconfigured. The failover of the storage system itself works just fine. I use NexentaStor for my filer.

    When I do a test (manual) failover of my storage, the following occurs (note: I run the admin VMs on NFS and customer VMs on iSCSI):

    - All NFS-based VMs remain up and working perfectly, through the failover and after.
    - All VMs running on iSCSI eventually report the following: an error about not being able to write to a particular block, then an error about journaling not working, and then the file system goes read-only.

    To get the VMs working again I have to do the following:

    1. Force shutdown of the "broken" VMs.
    2. Detach the iSCSI SR.
    3. Re-attach the iSCSI SR.
    4. Boot the VM on a different server (5 in my pool).

    If I don't boot on a different server I get this error:

        Internal error: Failure("The VDI <uuid> is already attached in RW mode; it can't be attached in RO mode!")

    The only way I have found to fix that error is to reboot the entire server it was running on previously, which is obviously a huge pain. Currently multipathing is NOT enabled (it can be enabled, and the same thing still occurs). I have edited much of the /etc/iscsid.conf file to work with the timeout settings, but to no avail.

    In short, my storage fails over properly but XenServer does not keep the connection alive. As a thought, the error in step 4 above might be the ultimate cause, and fixing that would fix everything? Any help would be appreciated more than you know.
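    For anyone comparing notes on the timeout tuning mentioned above, a minimal sketch of the open-iscsi settings usually involved in riding out a controller failover; the values are illustrative assumptions, not a tested XenServer configuration:

        # /etc/iscsid.conf -- illustrative failover-oriented timeouts (assumed values)
        # How long queued I/O waits for session recovery before erroring out;
        # raising it gives the filer time to fail over instead of the guest FS going read-only.
        node.session.timeo.replacement_timeout = 180
        # NOP-Out pings detect a dead connection and kick off recovery sooner.
        node.conn[0].timeo.noop_out_interval = 5
        node.conn[0].timeo.noop_out_timeout = 30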

  • Appcrash and possible malware

    - by Chris Lively
    First off, I'm running MS Intune Endpoint Protection, and it is completely up to date. On 10/25 @ 11:53 PM I came across a site that caused Intune to freak out:

        Microsoft Antimalware has detected malware or other potentially unwanted software.
        For more information please see the following:
        http://go.microsoft.com/fwlink/?linkid=37020&name=Trojan:Win64/Sirefef.B&threatid=2147646729
        Name: Trojan:Win64/Sirefef.B
        ID: 2147646729
        Severity: Severe
        Category: Trojan
        Path: file:_C:\Windows\System32\consrv.dll
        Detection Origin: Local machine
        Detection Type: Concrete
        Detection Source: Real-Time Protection
        User: NT AUTHORITY\SYSTEM
        Process Name: C:\Windows\explorer.exe
        Signature Version: AV: 1.115.526.0, AS: 1.115.526.0, NIS: 10.7.0.0
        Engine Version: AM: 1.1.7801.0, NIS: 2.0.7707.0

    I, of course, elected to simply delete the file. Since then my machine has been randomly reporting that "Host Process for Windows Services" stopped working. There are generally two different pieces of info:

        Description
        Faulting Application Path: C:\Windows\System32\svchost.exe
        Problem signature
        Problem Event Name: BEX64
        Application Name: svchost.exe
        Application Version: 6.1.7600.16385
        Application Timestamp: 4a5bc3c1
        Fault Module Name: StackHash_52d4
        Fault Module Version: 0.0.0.0
        Fault Module Timestamp: 00000000
        Exception Offset: 000062bdabe00000
        Exception Code: c0000005
        Exception Data: 0000000000000008
        OS Version: 6.1.7601.2.1.0.256.27
        Locale ID: 1033
        Additional Information 1: 52d4
        Additional Information 2: 52d47b8b925663f9d6437d7892cdf21b
        Additional Information 3: ed24
        Additional Information 4: ed24528f3b69e8539b5c5c2158896d3e

    and

        Description
        Faulting Application Path: C:\Windows\System32\svchost.exe
        Problem signature
        Problem Event Name: APPCRASH
        Application Name: svchost.exe
        Application Version: 6.1.7600.16385
        Application Timestamp: 4a5bc3c1
        Fault Module Name: mshtml.dll
        Fault Module Version: 9.0.8112.16437
        Fault Module Timestamp: 4e5f1784
        Exception Code: c0000005
        Exception Offset: 00000000002ed3c2
        OS Version: 6.1.7601.2.1.0.256.27
        Locale ID: 1033
        Additional Information 1: 3e9e
        Additional Information 2: 3e9e8b83f6a5f2a25451516023078a83
        Additional Information 3: 432a
        Additional Information 4: 432a0284c502cce3bbb92a3bd555fe65

    Intune claims the machine is clean. I've also tried some of the online scanners like Trend Micro, all of which claimed the system is clean. Finally, I tried sfc /scannow and it said all was good. I left my machine on overnight and there were about 50 of those messages. Ideas on how to proceed?
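    One hedged diagnostic note: consrv.dll is associated with the Sirefef/ZeroAccess family, and deleting the DLL alone often leaves damaged service registrations behind. Matching the crashing PID from the Event Viewer entries to a service group can narrow down which hosted service is dying:

        REM List each svchost.exe instance (PID) and the services it hosts,
        REM so the faulting PID from Event Viewer maps to a concrete service group.
        tasklist /svc /fi "IMAGENAME eq svchost.exe"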

  • Can't delete C:\Config.Msi\75ce84f.rbf

    - by Hugh Allen
    I can't delete C:\Config.Msi\75ce84f.rbf. It's not causing any problems, but it's a mystery I'd like to solve, preferably before the next reboot, because it's scheduled for deletion then (see pendmoves).

    - It's not read-only, system, or hidden.
    - It's not in use by another process (according to Process Explorer).
    - The NT security permissions aren't the problem either: I am the owner and have Full Control; as a double-check, the Effective Permissions tab shows that I have permission to delete.

    Yet trying to delete the file gives "Access is Denied" from both Explorer and cmd. I can, however, rename it or move it to another folder on the same drive. I can also read it, and VirusTotal says it's clean, which is what I would expect (it's just a Windows Installer temp file - a copy of some DLL, I think). The relevant line from Process Monitor is:

        6:52:14.3726983 PM  112  Explorer.EXE  SetDispositionInformationFile  C:\Config.Msi\75ce84f.rbf  CANNOT DELETE  Delete: True  Write  1232

    Background: I'm using XP SP2. I recently repaired my Adobe Reader installation to make it the default browser plugin again instead of Foxit (there seems to be no UI to do it otherwise?). So the installer did its thing and then asked to reboot. As is my habit when rebooting is inconvenient, I declined the offer and ran pendmoves to find out what files the installer had scheduled to move or delete. It wanted to delete two files with the .rbf extension (rollback files) located in C:\Config.msi\ (this applies to both, even though I've been speaking about one). So I tried to delete them manually and couldn't.

    Does anyone have any ideas what could be preventing deletion? (And I don't think it's malware, even though I'm not running AV at the moment.)
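    For reference, the queue that pendmoves reads lives in a registry value, so it can be inspected directly; a hedged way to look at the pending deletes:

        REM Show the rename/delete queue that pendmoves parses; entries with an
        REM empty destination are scheduled deletions at next boot.
        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager" /v PendingFileRenameOperations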

  • Problems with Apache SVN server (403 Forbidden)

    - by mrlanrat
    I've recently set up an SVN server on my Apache web server. I installed USVN (http://www.usvn.fr/) to help manage the repositories from a web interface. When I create a repository and try to import code into it from NetBeans, I get the following error:

        org.tigris.subversion.javahl.ClientException: RA layer request failed
        Server sent unexpected return value (403 Forbidden) in response to PROPFIND request for '/svn/python1'

    I know I have the username and password correct (and I have tried different users). I have done some research and it seems that it is most likely an Apache SVN error. Below is the config file for this virtual host.

        <VirtualHost *:80>
            ServerName svn.domain.com
            ServerAlias www.svn.domain.com
            ServerAlias admin.svn.domain.com
            DocumentRoot /home/mrlanrat/domains/svn.domain.com/usvn/public
            ErrorLog /var/log/virtualmin/svn.domain.com_error_log
            CustomLog /var/log/virtualmin/svn.domain.com_access_log combined
            DirectoryIndex index.html index.htm index.php index.php4 index.php5
            <Directory "/home/mrlanrat/domains/svn.domain.com/usvn">
                Options +SymLinksIfOwnerMatch
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
            <Location /svn/>
                ErrorDocument 404 default
                DAV svn
                Require valid-user
                SVNParentPath /home/mrlanrat/domains/svn.domain.com/usvn/files/svn
                SVNListParentPath on
                AuthType Basic
                AuthName "USVN"
                AuthUserFile /home/mrlanrat/domains/svn.domain.com/usvn/files/htpasswd
                AuthzSVNAccessFile /home/mrlanrat/domains/svn.domain.com/usvn/files/authz
            </Location>
        </VirtualHost>

    Can anyone point out what I may have done wrong and how to fix it? I have tested changing file permissions and changing the configuration, with no luck. Thanks in advance!
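    A hedged way to take NetBeans out of the equation and see what Apache itself thinks: run the same PROPFIND-driven operation with the stock svn client and watch the vhost error log (the username below is a placeholder; paths are the ones from the config above):

        # Reproduce the 403 with the command-line client.
        svn ls http://svn.domain.com/svn/python1 --username someuser
        # Then see which module (mod_authz_svn vs mod_dav_svn) rejected the request.
        tail -f /var/log/virtualmin/svn.domain.com_error_log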

  • Indefinite hang when restoring SQL 2005 database on a SQL 2008 server in EC2

    - by erinloy
    I'm trying to restore a 25 GB database backup taken from a Windows 2003/SQL 2005 machine to a Windows 2008/SQL 2008 machine in the Amazon EC2 cloud, using a .bak file and SQL Management Studio. SQL Management Studio reports the restore reaches 100% complete, and then just hangs indefinitely (24+ hours) using a lot of CPU, until I restart the SQL Server service. Upon restart, SQL again uses a lot of CPU for what seems to be an indefinite amount of time, but the DB never comes online. Here are some details:

    - I have created two EBS volumes, one for DATA and one for LOGS, and I have set the default directories in SQL Server to the \DATA and \LOG directory on these respective volumes. (I wonder if the issue could be related to this, but the DB is too big to restore on the root drive.)
    - I have given the SQL Server user group full access to these directories.
    - The server can create a new empty test DB in these directories just fine, and can back up and restore the test DB.
    - I have tried both restoring a .bak file and attaching directly to copies of the original .mdf/.ldf files, and the result is the same in both cases.
    - Both the .bak restore and the .mdf/.ldf attach occur from/to the EBS volumes.
    - I've also tried the above via SQL script, and "WITH RECOVERY", with no difference in the result, just less UI.
    - The backup contains two full-text indexes.
    - I have to use "WITH MOVE" for most of the files in the backup.
    - There's nothing wrong with the backup or .mdf/.ldf files, as this works just fine on a Windows 2003/SQL 2005 machine in Amazon EC2, but not Windows 2008/SQL 2008.
    - The DB is NOT marked as "Restoring" in SQL Management Studio; it is just listed as a normal database, but throws errors when I try to do anything with it (expand the object browser tree, view properties, etc.).

    Any ideas?
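    For reference, a hedged script version of the same restore (database and logical file names are placeholders; take the real ones from RESTORE FILELISTONLY). The post-100% phase on a 2005-era backup is when SQL Server runs upgrade and recovery steps, so the SQL error log during that window is usually more informative than the Studio progress bar:

        REM Discover the logical file names inside the backup.
        sqlcmd -S localhost -E -Q "RESTORE FILELISTONLY FROM DISK = 'D:\backups\mydb.bak'"
        REM Restore with explicit moves and progress reporting (names are placeholders).
        sqlcmd -S localhost -E -Q "RESTORE DATABASE MyDb FROM DISK = 'D:\backups\mydb.bak' WITH MOVE 'MyDb_Data' TO 'D:\DATA\MyDb.mdf', MOVE 'MyDb_Log' TO 'E:\LOG\MyDb.ldf', RECOVERY, STATS = 5"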

  • Ubuntu 12.04 LDAP SSL self-signed cert not accepted

    - by MaddHacker
    I'm working with Ubuntu 12.04, using OpenLDAP server. I've followed the instructions on the Ubuntu help pages and can happily connect without security. To test my connection, I'm using ldapsearch; the command looks like:

        ldapsearch -xv -H ldap://ldap.[my host].local -b dc=[my domain],dc=local -d8 -ZZ

    I've also used:

        ldapsearch -xv -H ldaps://ldap.[my host].local -b dc=[my domain],dc=local -d8

    As far as I can tell, I've set up my certificate correctly, but no matter what I try, I can't seem to get ldapsearch to accept my self-signed certificate. So far, I've tried:

    - Updating my /etc/ldap/ldap.conf file to look like:

        BASE dc=[my domain],dc=local
        URI ldaps://ldap.[my host].local
        TLS_CACERT /etc/ssl/certs/cacert.crt
        TLS_REQCERT allow

    - Updating my /etc/ldap.conf file to look like:

        base dc=[my domain],dc=local
        uri ldapi:///ldap.[my host].local
        uri ldaps:///ldap.[my host].local
        ldap_version 3
        ssl start_tls
        ssl on
        tls_checkpeer no
        TLS_REQCERT allow

    - Updating my /etc/default/slapd to include:

        SLAPD_SERVICES="ldap:/// ldapi:/// ldaps:///"

    - Several hours of Googling, most of which resulted in adding the TLS_REQCERT allow.

    The exact error I'm seeing is:

        ldap_initialize( ldap://ldap.[my host].local )
        request done: ld 0x20038710 msgid 1
        TLS certificate verification: Error, self signed certificate in certificate chain
        TLS: can't connect.
        ldap_start_tls: Connect error (-11)
            additional info: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

    After several hours of this, I was hoping someone else has seen this issue and/or knows how to fix it. Please do let me know if I should add more information, or if you need further data.
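    Two hedged checks that often narrow this down: confirm which chain the server actually presents, and point ldapsearch at the CA explicitly via the environment, which bypasses any ldap.conf parsing question (hostname and paths below are placeholders):

        # Inspect the certificate chain slapd serves on the ldaps port.
        openssl s_client -connect ldap.example.local:636 -showcerts
        # Override the CA for a single ldapsearch run.
        LDAPTLS_CACERT=/etc/ssl/certs/cacert.crt ldapsearch -xv -H ldaps://ldap.example.local -b dc=example,dc=local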

  • nginx reload failing: `object version does not match bootstrap parameter`

    - by Jean Jordaan
    I added a server stanza to my virtual.conf, and now nginx seems to have a problem reloading the config. At this point I don't know what exactly is going wrong or how to debug better. Any help would be most appreciated.

    The config test succeeds:

        root@server:~# service nginx configtest
        nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
        nginx: configuration file /etc/nginx/nginx.conf test is successful

    I'm tailing the logfile. Upon reload, the following error is logged. As far as I can see, the new config is not used.

        root@server:~# service nginx reload
        Reloading nginx:                                           [  OK  ]
        root@server:~#
        ==> /var/log/nginx/error.log <==
        nginx object version 0.8.54 does not match bootstrap parameter 1.0.15 at /usr/lib64/perl5/XSLoader.pm line 94.
        Compilation failed in require.
        BEGIN failed--compilation aborted.
        2012/10/18 12:31:07 [alert] 9620#0: perl_parse() failed: 2

    This is the version of nginx I'm running:

        root@server:~# yum info nginx
        Loaded plugins: fastestmirror, presto
        Loading mirror speeds from cached hostfile
         * base: ftp.udc.es
         * epel: mirror.nl.leaseweb.net
         * extras: ftp.udc.es
         * updates: ftp.cica.es
        Installed Packages
        Name    : nginx
        Arch    : x86_64
        Version : 1.0.15
        Release : 2.el6
        [...]

    Server OS: CentOS release 6.3 (Final)
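    The error reads as if a still-running 0.8.54 master process is loading the 1.0.15 Perl bindings; a hedged check is to confirm which binary the running master actually is, and then do a full stop/start rather than a reload, since a reload keeps the old master alive (pid-file path is an assumption):

        # Which binary is the running master using?
        ps -p $(cat /var/run/nginx.pid) -o pid,cmd
        /usr/sbin/nginx -V 2>&1 | head -n1
        # A reload re-execs config under the old master; a restart replaces the binary.
        service nginx stop && service nginx start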

  • Synchronizing audio and video using MP4Box / ffmpeg to concatenate files

    - by jdl2003
    I have two H.264-encoded MPEG-4 files that I need to concatenate. I have been using MP4Box for this task by first ensuring the files are encoded identically (I even went so far as to compare output from h264_parse on their video tracks) and then concatenating with this command:

        MP4Box -cat file1.mp4 -cat file2.mp4 output_file.mp4

    This works and the output file is playable, but on playback in QuickTime or VLC the second video's audio starts too soon, making the entire second part of the concatenated file out of sync.

    I have tried re-encoding the output through ffmpeg with -vcodec copy and -acodec copy, but the sync issue persists. I have also tried converting to MPEG-2 format first, concatenating with a simple cat file1.mpg file2.mpg > output.mpg, and re-encoding the result with ffmpeg. This was even worse.

    I know that I can pass commands to MP4Box to adjust the start time of the audio track, but I am trying to do this programmatically for any input video (in the same encoding, of course). I am looking for possible solutions that would not require human intervention / manual adjustments. Or, at least, an understanding of what is happening to make the second part of the concatenated video go out of sync.
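    One hedged theory worth checking: AAC frames rarely divide evenly into the video duration, so file1's audio track can be slightly shorter or longer than its video track, and everything appended after the splice inherits that offset. Comparing per-track durations before concatenating makes the mismatch visible (ffprobe only if your ffmpeg build ships it):

        # Per-track durations as MP4Box sees them; compare the video and audio
        # track lengths of file1.mp4 -- any difference shifts the second file's audio.
        MP4Box -info file1.mp4
        ffprobe -show_streams file1.mp4 2>/dev/null | grep -E '^(codec_type|duration)='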

  • Crazy problem with Nginx, PHP5-FPM on Ubuntu

    - by Emmanuel
    I've been trying to move a domain from shared hosting to my new VPS. Everything was working just 100% fine, and then all of a sudden rewrites stopped working and pictures that should work started returning 404s. I've got no idea why, but for some reason on my site (http://www.onlythebible.com/) only the home page works; all the other pages depend on rewrites, which were working perfectly fine at one stage but all of a sudden stopped working. Some of the pictures, like http://www.onlythebible.com/bgsPreview/Matthew-8.10.jpg, which doesn't use a rewrite, throw a 404.

    I'm almost certain it has nothing to do with the nginx configuration. I've got suspicions that it could be something to do with php5-fpm? The funny thing is, all of a sudden it started working again. And then an hour or so later it broke again and has now gone back to only displaying the home page, and all of the links (and some of the pictures) are just showing 404s. Does anyone have an idea of what the problem might be? I'm pretty new to the whole Linux VPS thing, but this just seems very strange.

    Edit: here's a line from the error log which might shed some light on the problem:

        2011/02/06 03:04:59 [error] 2873#0: *220 open() "/usr/local/nginx/html/bgsPreview/Matthew-8.10.jpg" failed (2: No such file or directory), client: 114.77.115.211, server: onlythebible.com, request: "GET /bgsPreview/Matthew-8.10.jpg HTTP/1.1", host: "www.onlythebible.com", referrer: "http://www.onlythebible.com/"

    I wonder why it's trying to find the file in /usr/local/nginx/html instead of the proper root, which is /var/www/ etc. Oh, and for some reason it's just started working again... for how long, I don't know. Another thing that was a bit weird is that the pages on my website are pulled from a database, but when I edited the database, the pages didn't change. It's almost like they've been cached or something.
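    The error log points at nginx's compiled-in default root (/usr/local/nginx/html), which is what nginx falls back to for any location that doesn't inherit a root directive. A hedged sketch of the usual arrangement; the docroot path is assumed from the log above and should be adjusted:

        # Declare root once at server level so every location -- including plain
        # static requests like /bgsPreview/*.jpg -- inherits it.
        server {
            server_name onlythebible.com www.onlythebible.com;
            root /var/www/onlythebible.com;   # assumed path; use the real docroot

            location / {
                try_files $uri $uri/ /index.php?$args;
            }
        }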

  • Problem: Munin Graph

    - by Pablo
    I've been trying to install Munin for 15 days; I've looked for information, analyzed logs, and even deleted and reinstalled Munin using yum. I'm hosted at Media Temple on a VPS with CentOS. The problem is still there and it's driving me nuts. The graphs render broken, as shown here: http://imageshack.us/photo/my-images/833/capturadepantalla201106u.png/

    This is the configuration in my munin.conf file:

        dbdir /var/lib/munin
        htmldir /var/www/munin
        logdir /var/log/munin
        rundir /var/run/munin

        [localhost]
        address **.**.***.***  #IP VPS

    This is the configuration in my munin-node.conf file:

        log_level 4
        log_file /var/log/munin/munin-node.log
        port 4949
        pid_file /var/run/munin/munin-node.pid
        background 1
        setseid 1

        # Which port to bind to;
        host *
        user root
        group root
        setsid yes

        # Regexps for files to ignore
        ignore_file ~$
        ignore_file \.bak$
        ignore_file %$
        ignore_file \.dpkg-(tmp|new|old|dist)$
        ignore_file \.rpm(save|new)$

        allow ^127\.0\.0\.1$

    Thanks so much, I appreciate all the answers.

    UPDATE - munin-graph.log:

        Jun 22 16:30:02 - Starting munin-graph
        Jun 22 16:30:02 - Processing domain: localhost
        Jun 22 16:30:02 - Graphed service : open_inodes (0.14 sec * 4)
        Jun 22 16:30:02 - Graphed service : sendmail_mailtraffic (0.10 sec * 4)
        Jun 22 16:30:02 - Graphed service : apache_processes (0.12 sec * 4)
        Jun 22 16:30:02 - Graphed service : entropy (0.10 sec * 4)
        Jun 22 16:30:02 - Graphed service : sendmail_mailstats (0.14 sec * 4)
        Jun 22 16:30:02 - Graphed service : processes (0.14 sec * 4)
        Jun 22 16:30:03 - Graphed service : apache_accesses (0.27 sec * 4)
        Jun 22 16:30:03 - Graphed service : apache_volume (0.15 sec * 4)
        Jun 22 16:30:03 - Graphed service : df (0.21 sec * 4)
        Jun 22 16:30:03 - Graphed service : netstat (0.19 sec * 4)
        Jun 22 16:30:03 - Graphed service : interrupts (0.14 sec * 4)
        Jun 22 16:30:03 - Graphed service : swap (0.14 sec * 4)
        Jun 22 16:30:04 - Graphed service : load (0.11 sec * 4)
        Jun 22 16:30:04 - Graphed service : sendmail_mailqueue (0.13 sec * 4)
        Jun 22 16:30:04 - Graphed service : cpu (0.21 sec * 4)
        Jun 22 16:30:04 - Graphed service : df_inode (0.16 sec * 4)
        Jun 22 16:30:04 - Graphed service : open_files (0.16 sec * 4)
        Jun 22 16:30:04 - Graphed service : forks (0.13 sec * 4)
        Jun 22 16:30:05 - Graphed service : memory (0.26 sec * 4)
        Jun 22 16:30:05 - Graphed service : nfs_client (0.36 sec * 4)
        Jun 22 16:30:05 - Graphed service : vmstat (0.10 sec * 4)
        Jun 22 16:30:05 - Processed node: localhost (3.45 sec)
        Jun 22 16:30:05 - Processed domain: localhost (3.45 sec)
        Jun 22 16:30:05 - Munin-graph finished (3.46 sec)
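    Since munin-graph clearly completes, a hedged next step is to verify the node answers with real values when polled by hand; broken images with healthy graphing runs often point at the HTML/webserver side rather than data collection:

        # Talk to munin-node directly: 'list' shows the plugins it offers,
        # 'fetch cpu' shows live values for one of them.
        telnet localhost 4949
        list
        fetch cpu
        quit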

  • Problem booting virtual machine after converting VMDK to VHD

    - by vg1890
    I used the VMware vCenter Converter Standalone Client to convert a physical drive on my old PC to a virtual drive. The conversion worked fine and I ended up with a valid VMDK file. Next, I wanted to convert the VMDK to a VHD for use with Microsoft Virtual PC, since that's what I use on my new box. I used WinImage for the conversion and that worked fine, too. I can access the files from the virtual drive through WinImage.

    However, when I create a new virtual machine using Virtual PC and add the existing VHD file, the machine doesn't boot. The initial boot screen flashes with the amount of RAM and then the screen goes black. If I turn off the VM and reboot in safe mode, I can see the drivers being loaded until eventually it gets to crcdisk.sys and hangs indefinitely.

    Any ideas how to fix this? I'm not opposed to starting over from scratch if there's another method to turn my physical machine into a Virtual PC VM. Thanks!

    EDIT - I should add that the virtual drive is a system boot drive and not a secondary drive.

    EDIT - I tried booting from the install CD and doing a repair. The result was that the system could not be repaired due to a "driver error."
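    A hang at crcdisk.sys after a platform move is classically the guest's storage driver not matching the new controller (Virtual PC emulates plain IDE). A hedged sketch of re-enabling the generic IDE stack inside the guest, run in the source OS before conversion or against the guest's mounted SYSTEM hive; service names assume a standard XP/Vista-era install:

        REM Force the generic IDE/ATAPI drivers to load at boot so the guest can
        REM mount its system disk on Virtual PC's emulated IDE controller.
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\intelide" /v Start /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\pciide"   /v Start /t REG_DWORD /d 0 /f
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\atapi"    /v Start /t REG_DWORD /d 0 /f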

  • How do I start nginx on port 80 at OS X login?

    - by Bryson
    I installed nginx using Homebrew, and after completing the installation the following message was displayed:

        In the interest of allowing you to run `nginx` without `sudo`, the default
        port is set to localhost:8080.

        If you want to host pages on your local machine to the public, you should
        change that to localhost:80, and run `sudo nginx`. You'll need to turn off
        any other web servers running port 80, of course.

        You can start nginx automatically on login running as your user with:
          mkdir -p ~/Library/LaunchAgents
          cp #{prefix}/org.nginx.nginx.plist ~/Library/LaunchAgents/
          launchctl load -w ~/Library/LaunchAgents/org.nginx.nginx.plist

        Though note that if running as your user, the launch agent will fail if you
        try to use a port below 1024 (such as http's default of 80.)

    But I want nginx, on port 80, running at login, and I don't want to have to open Terminal and type in sudo nginx to do it. I want it to load from a plist file like Redis and PostgreSQL do.

    I moved the plist to /Library/LaunchAgents/ from the user folder equivalent and changed its ownership. I also tried setting the user directive in the nginx.conf file, and I still get the same error message in Console.app:

        nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)

    (along with another message telling me that since nginx was being run without super-user privileges, the user directive was being ignored)
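    LaunchAgents run as the logged-in user no matter who owns the plist file; binding a port below 1024 needs a LaunchDaemon, which launchd runs as root at boot. A hedged sketch, assuming the plist is still in the user LaunchAgents folder:

        # LaunchDaemons (not LaunchAgents) run as root, so nginx can bind :80.
        sudo mv ~/Library/LaunchAgents/org.nginx.nginx.plist /Library/LaunchDaemons/
        sudo chown root:wheel /Library/LaunchDaemons/org.nginx.nginx.plist
        sudo launchctl load -w /Library/LaunchDaemons/org.nginx.nginx.plist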

  • Basic 301 Redirection Help

    - by Marc
    I am trying to learn redirection for a WordPress site of my own. I am testing the concept of redirecting a single WordPress post by using a dummy site. However, it doesn't seem to be working for me. I am trying to redirect

        www.perfectmatchmaker[dot]org/finding-the-right-matchmaker

    to

        www.perfectmatchmaker[dot]org/finding-the-perfect-matchmaker

    I read that the following is how to do this:

        Redirect 301 /old.html http://www.you[dot]com/new.html

    So this is what my .htaccess file currently looks like:

        # Use PHP5 as default
        AddHandler application/x-httpd-php5 .php

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress

        Redirect 301 /finding-the-right-matchmaker.html http://www.perfectmatchmaker.org/finding-the-perfect-matchmaker.html

    I've also tried removing the ".html". The redirection of the URL is finally working, but the URL shows no posts available. If I try to redirect the other post on the site by adding the following on the next line of the .htaccess file, I get an error that there is a "redirect loop" occurring:

        redirect 301 /find-love-and-your-perfect-match-through-the-use-of-a-match-maker http://www.perfectmatchmaker.org/find-love-and-your-perfect-match

    Any help you can provide me would be much appreciated. Thanks! Marc
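    A hedged sketch of one way to sidestep the mod_alias/mod_rewrite interaction entirely: express the 301s as mod_rewrite rules ahead of the WordPress catch-all, so one module handles both the redirects and the routing (slugs are the ones from the question; WordPress permalinks carry no .html):

        # BEGIN WordPress
        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteBase /
        # Explicit 301s first, so they match before the catch-all below.
        RewriteRule ^finding-the-right-matchmaker/?$ http://www.perfectmatchmaker.org/finding-the-perfect-matchmaker [R=301,L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule . /index.php [L]
        </IfModule>
        # END WordPress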

  • How to set shmall, shmmax, shmni, etc., in general and for PostgreSQL

    - by jpic
    I've used the documentation from PostgreSQL to set, for example, this config:

        >>> cat /proc/meminfo
        MemTotal:        16345480 kB
        MemFree:          1770128 kB
        Buffers:           382184 kB
        Cached:          10432632 kB
        SwapCached:             0 kB
        Active:           9228324 kB
        Inactive:         4621264 kB
        Active(anon):     7019996 kB
        Inactive(anon):    548528 kB
        Active(file):     2208328 kB
        Inactive(file):   4072736 kB
        Unevictable:            0 kB
        Mlocked:                0 kB
        SwapTotal:              0 kB
        SwapFree:               0 kB
        Dirty:               3432 kB
        Writeback:              0 kB
        AnonPages:        3034588 kB
        Mapped:           4243720 kB
        Shmem:            4533752 kB
        Slab:              481728 kB
        SReclaimable:      440712 kB
        SUnreclaim:         41016 kB
        KernelStack:         1776 kB
        PageTables:         39208 kB
        NFS_Unstable:           0 kB
        Bounce:                 0 kB
        WritebackTmp:           0 kB
        CommitLimit:      8172740 kB
        Committed_AS:    14935216 kB
        VmallocTotal:    34359738367 kB
        VmallocUsed:       399340 kB
        VmallocChunk:    34359334908 kB
        HardwareCorrupted:      0 kB
        AnonHugePages:     456704 kB
        HugePages_Total:        0
        HugePages_Free:         0
        HugePages_Rsvd:         0
        HugePages_Surp:         0
        Hugepagesize:        2048 kB
        DirectMap4k:        12288 kB
        DirectMap2M:     16680960 kB

        >>> ipcs -l

        ------ Shared Memory Limits --------
        max number of segments = 4096
        max seg size (kbytes) = 4316816
        max total shared memory (kbytes) = 4316816
        min seg size (bytes) = 1

        ------ Semaphore Limits --------
        max number of arrays = 128
        max semaphores per array = 250
        max semaphores system wide = 32000
        max ops per semop call = 32
        semaphore max value = 32767

        ------ Messages Limits --------
        max queues system wide = 31918
        max size of message (bytes) = 8192
        default max size of queue (bytes) = 16384

    sysctl.conf extract:

        kernel.shmall = 1079204
        kernel.shmmax = 4420419584

    postgresql.conf non-defaults:

        max_connections = 60                 # (change requires restart)
        shared_buffers = 4GB                 # min 128kB
        work_mem = 4MB                       # min 64kB
        wal_sync_method = open_sync          # the default is the first option
        checkpoint_segments = 16             # in logfile segments, min 1, 16MB each
        checkpoint_completion_target = 0.9   # checkpoint target duration, 0.0 - 1.0
        effective_cache_size = 6GB

    Is this appropriate? If not (or not necessarily), in which case would it be appropriate? We did note nice performance improvements with this config; how would you improve it? How should kernel memory management parameters be set? Can anybody explain how to really set them from the ground up?
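    A hedged recap of the arithmetic the PostgreSQL docs walk through: shmmax is a per-segment limit in bytes, while shmall is a system-wide limit in pages, so shmall is normally shmmax (or the total shm budget) divided by the page size. A sketch with an illustrative 6 GB budget:

        # Size the kernel shm limits for, e.g., 4 GB shared_buffers plus headroom.
        SHMMAX=$(( 6 * 1024 * 1024 * 1024 ))          # bytes; illustrative value
        PAGE=$(getconf PAGE_SIZE)                     # normally 4096
        sysctl -w kernel.shmmax=$SHMMAX
        sysctl -w kernel.shmall=$(( SHMMAX / PAGE ))  # note: pages, not bytes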

  • "Could not claim interface on camera: -6" when trying to connect usb camera (Kinect)

    - by rzetterberg
    I have installed the freenect library from openkinect.org. With that library there is a demo application which you can run from the terminal to test out the Kinect. However, when I run this command I get the following output:

        richard@behemoth:~$ sudo freenect-glview
        Kinect camera test
        Number of devices found: 1
        Could not claim interface on camera: -6
        Could not open device

    This particular error is thrown by the libusb function libusb_claim_interface, and the error -6 corresponds to LIBUSB_ERROR_BUSY. So my guess is that it has something to do with how the USB device is claimed, rather than specifically the freenect library or the Kinect itself. So my question is: how can I find out what resource is using this interface, and how can I free it so that I can access it?

    Edit: What I have tried so far (just to be sure):

    - Rebooted
    - Plugged it out, plugged it in
    - Tried different USB ports
    - Restarted udev

    Additional information that might be useful:

        $ cat /etc/fstab
        # /etc/fstab: static file system information.
        #
        # Use 'blkid -o value -s UUID' to print the universally unique identifier
        # for a device; this may be used with UUID= as a more robust way to name
        # devices that works even if disks are added and removed. See fstab(5).
        #
        # <file system> <mount point> <type> <options> <dump> <pass>
        proc /proc proc nodev,noexec,nosuid 0 0
        # / was on /dev/sda1 during installation
        UUID=1c73f217-ac8d-451b-8390-7a680628a856 / ext4 errors=remount-ro 0 1
        # swap was on /dev/sda5 during installation
        UUID=bb49bd29-07ec-45a0-bbab-46fb8362b06b none swap sw 0 0

        $ uname -a
        Linux behemoth 3.0.0-14-generic-pae #23-Ubuntu SMP Mon Nov 21 22:07:10 UTC 2011 i686 i686 i386 GNU/Linux

        $ cat /etc/lsb-release
        DISTRIB_ID=Ubuntu
        DISTRIB_RELEASE=11.10
        DISTRIB_CODENAME=oneiric
        DISTRIB_DESCRIPTION="Ubuntu 11.10"
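    On kernels in the 3.0 range, the usual holder of that interface is the gspca_kinect webcam driver, which claims the Kinect's camera as a V4L device before libfreenect can get to it. A hedged check and workaround (the blacklist file name is an assumption):

        # See whether the kernel webcam driver has claimed the Kinect.
        lsmod | grep gspca
        # Unload it for this session...
        sudo modprobe -r gspca_kinect
        # ...and keep it from autoloading at boot.
        echo "blacklist gspca_kinect" | sudo tee /etc/modprobe.d/blacklist-kinect.conf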

  • How to run multiple instances of Tor?

    - by Ed
    I'm trying to set up a special proxy server (running Windows). It will have several instances of Privoxy and Tor running, and my app will choose which Privoxy instance to send HTTP requests to depending on the load. Privoxy will then forward them to Tor. I'm using srvany.exe to create the services.

    At the moment I'm running 3 Privoxy and 3 Tor services (I copied the binaries to different folders). Each Privoxy service is listening on its own port (8118, 8119, 8120). I can see them listening in a port scanner. This is the application path (for srvany in the registry) for the 1st service:

        C:\Anonymiser\Privoxy 01\privoxy.exe --service

    I've also configured the Tor services to listen on different ports (9050, 9052, 9054). This is the application path for the 1st service:

        C:\Anonymiser\Tor 01\tor.exe -f "C:\Anonymiser\Tor 01\torrc"

    The problem is, when I start the Tor services, only the first service I start is listening on its port. The others aren't listening. They do listen if I run them separately. Any ideas what could be wrong? How can I make all 3 services listen on their assigned ports?

    This is one of my Privoxy configs:

        confdir .
        logdir .
        logfile privoxy.log
        debug 1      # show each GET/POST/CONNECT request
        debug 4096   # Startup banner and warnings
        debug 8192   # Errors - we highly recommended enabling this
        listen-address localhost:8118
        toggle 0
        enable-remote-toggle 0
        enable-remote-http-toggle 0
        enable-edit-actions 1
        buffer-limit 4096
        forwarded-connect-retries 0
        forward-socks4a / localhost:9050 .

    This is one of my Tor configs:

        ControlPort 9051
        Log notice stdout
        SocksListenAddress localhost
        SocksPort 9050

    EDIT: Found a workaround. The Tor binary wants a lock on a file in the AppData folder. Because all of them want a lock on the same file, only the first one I start will be working. The workaround is to run each Tor instance under a different account. Not the best solution, but it works.
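    The lock file mentioned in the edit lives in Tor's data directory, which defaults to a per-user AppData location; a hedged alternative to separate accounts is to give each instance its own DataDirectory (and its own ControlPort) in its torrc:

        # C:\Anonymiser\Tor 01\torrc -- per-instance paths (illustrative)
        DataDirectory C:\Anonymiser\Tor 01\data
        SocksPort 9050
        ControlPort 9051
        Log notice file C:\Anonymiser\Tor 01\tor.log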

  • What does the asterisk mean after a filename in `ls -l` output?

    - by James.Elsey
    I've done an ls -l inside a directory, and my files are displaying like this:

        james@nevada:~/development/tools/android-sdk-linux_86/tools$ ll
        total 9512
        drwxr-xr-x 3 james james    4096 2010-05-07 19:48 ./
        drwxr-xr-x 6 james james    4096 2010-08-21 20:43 ../
        -rwxr-xr-x 1 james james  341773 2010-05-07 19:47 adb*
        -rwxr-xr-x 1 james james    3636 2010-05-07 19:47 android*
        -rwxr-xr-x 1 james james    2382 2010-05-07 19:47 apkbuilder*
        -rwxr-xr-x 1 james james    3265 2010-05-07 19:47 ddms*
        -rwxr-xr-x 1 james james   89032 2010-05-07 19:47 dmtracedump*
        -rwxr-xr-x 1 james james    1940 2010-05-07 19:47 draw9patch*
        -rwxr-xr-x 1 james james 6886136 2010-05-07 19:47 emulator*
        -rwxr-xr-x 1 james james  478199 2010-05-07 19:47 etc1tool*
        -rwxr-xr-x 1 james james    1987 2010-05-07 19:47 hierarchyviewer*
        -rwxr-xr-x 1 james james   23044 2010-05-07 19:47 hprof-conv*
        -rwxr-xr-x 1 james james    1939 2010-05-07 19:47 layoutopt*
        drwxr-xr-x 4 james james    4096 2010-05-07 19:48 lib/
        -rwxr-xr-x 1 james james   16550 2010-05-07 19:47 mksdcard*
        -rw-r--r-- 1 james james  205851 2010-05-07 19:48 NOTICE.txt
        -rw-r--r-- 1 james james      33 2010-05-07 19:47 source.properties
        -rwxr-xr-x 1 james james 1447936 2010-05-07 19:47 sqlite3*
        -rwxr-xr-x 1 james james    3044 2010-05-07 19:47 traceview*
        -rwxr-xr-x 1 james james  187965 2010-05-07 19:47 zipalign*

    What does that asterisk mean? I'm also unable to run a particular file, as follows:

        james@nevada:~/development/tools/android-sdk-linux_86/tools$ ./emulator
        bash: ./emulator: No such file or directory

    EDIT: I'm trying to get Eclipse to use the emulator, but it keeps complaining the file does not exist, yet it is here?
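    Two hedged notes for anyone hitting this same pair of symptoms: the trailing * is just the -F/--classify decoration for executables (ll is commonly an alias that includes -F), and "No such file or directory" for a file that plainly exists is the classic sign of a 32-bit binary whose loader is missing on a 64-bit system. The package name below is the usual one for Ubuntu of that era, not verified here:

        # The asterisk is -F classification, not part of the file name.
        type ll                      # typically: ll is aliased to `ls -alF`
        # Check whether emulator is a 32-bit ELF binary...
        file emulator
        # ...and if so, install the 32-bit runtime libraries (assumed package name).
        sudo apt-get install ia32-libs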

  • Ant task to pre-compile JSPs on a WebLogic server

    - by user24560
    I am trying to create an Ant task to compile JSPs. Here are the excerpts from the build.xml related to the task:

        ....
        <fileset dir="${java.home}/lib">
          <include name="tools.jar"/>
        </fileset>
        <java classname="weblogic.jspc" fork="yes">
          <classpath refid="weblogic.jsp.classpath" />
          <sysproperty key="weblogic.jsp.windows.caseSensitive" value="false"/>
          <arg line="-forceGeneration -keepgenerated -compileAll -webapp ${jsp.src.dir} -d ${jsp.generated.src.dir}"/>
        </java>

    When I try to run the wl.jsp.generate task, I get:

        wl.jsp.generate:
        [java] [jspc] warning: expected file /WEB-INF/web.xml not found, tag libraries cannot be resolved.
        [java] [jspc] Overriding default descriptor option 'keepgenerated' with value specified on command-line 'true'
        [java] Exception encountered while compiling C:\workspace\smcmw\smcmw_browser\jsp\smcesearchprogress.jsp
        [java] java.lang.NoSuchMethodError: javax.servlet.jsp.tagext.TagAttributeInfo.<init>(Ljava/lang/String;ZLjava/lang/String;ZZLjava/lang/String;ZZLjava/lang/String;Ljava/lang/String;)V
        [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:64)
        [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:57)
        [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.<init>(TagAttrInfoEx.java:41)
        [java]     at weblogic.jsp.internal.jsp.tag.TagAttrInfoEx.read(TagAttrInfoEx.java:86)

    It looks like it fails because it can't find the WEB-INF/web.xml file and the tag libraries. How can I fix this?
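    A hedged reading of that NoSuchMethodError: an older JSP-API class is shadowing WebLogic's own on the jspc classpath, since the TagAttributeInfo constructor with that signature appeared in later JSP API revisions. One way to find which jar ships the stale class (JSPC_CLASSPATH is a hypothetical variable holding the same entries as weblogic.jsp.classpath):

        # Scan every jar on the classpath for the offending class; a jar other
        # than WebLogic's own that ships it earlier on the path is the likely culprit.
        for j in $(echo "$JSPC_CLASSPATH" | tr ':' ' '); do
          unzip -l "$j" 2>/dev/null | grep -q 'javax/servlet/jsp/tagext/TagAttributeInfo.class' && echo "$j"
        done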

  • RoboCopy fails with "the specified network name is no longer available"

    - by Justin Scott
    We have a scheduled task that runs robocopy periodically to mirror a rather large folder structure from one server to another (thousands of folders, 100,000+ files, 50+ GB in size). There is a share on the receiving server where the mirror gets stored. We're running the task from the origin server, connecting out to the share on the receiving end. Both servers run Windows Server 2003 and are connected to the same network switch (100 Mbps).

    The process will sometimes complete all the way through without error. More often than not, however, at some point during the process (seemingly random as to where), robocopy will fail with the error "The specified network name is no longer available". It will wait 30 seconds and try the file again, and eventually give up after a number of retries. The process repeats at the next scheduled interval and may complete... or not.

    When this occurs, I am not able to access the share at all on the destination server from anywhere on the network for up to 30 minutes. There is nothing else on the network using this share. My question is: what does this error mean specifically? Why is the share "dropping off" and becoming inaccessible? Is there a way to prevent it and make the file mirroring more stable?
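    Not an answer to the underlying share drop, but a hedged mitigation while it is investigated: robocopy's retry and throttling switches keep one transient failure from snowballing (the default is a million retries), and an inter-packet gap eases pressure on a 100 Mbps link. Paths below are placeholders:

        REM /R and /W tame the default retry storm; /ZB copies in restartable mode,
        REM falling back to backup mode; /IPG:n inserts n ms between 64 KB blocks.
        robocopy D:\data \\destserver\mirror /MIR /R:5 /W:30 /ZB /IPG:50 /LOG+:C:\logs\mirror.log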

  • CloneZilla PXE Boot Without NFS

    - by John
    I am trying to set up Clonezilla to be bootable via PXE without using NFS. I do not have NFS running on our PXE server and would like to keep it that way. However, most of the information that I have found online indicates that you need to set up NFS in order to PXE boot Clonezilla. I believe that I am pretty close to getting it to work, but am not sure where to go next. Listed below are the different PXE menu option configurations that I have used so far.

        LABEL Clonezilla Live
        MENU LABEL Clonezilla Live
        KERNEL utilities/clonezilla/vmlinuz
        APPEND initrd=utilities/clonezilla/initrd.img boot=live live-config noswap nolocales edd=on nomodeset ocs_live_run="ocs-live-general" ocs_live_extra_param="" ocs_live_keymap="" ocs_live_batch="no" o$

    I have also tried the following append lines, without success:

        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=tftp://10.130.155.23/filesystem.squashfs
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=normal nomodeset nosplash fetch=tftp://10.130.155.23/filesystem.squashfs

    Each of them has been a no-go, with the following error: "Unable to find a live file system on the network". It looks like it gets to the point of trying to load the filesystem.squashfs file, hangs, and then throws the error. Any help would be greatly appreciated.
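    One hedged variant worth trying: live-boot's fetch= also speaks HTTP, which tends to be easier to verify (and more reliable for a 100+ MB squashfs) than TFTP. This assumes a web server on the PXE host with the image copied into its docroot; the path is illustrative:

        # Serve filesystem.squashfs over HTTP instead of TFTP.
        APPEND initrd=utilities/clonezilla/initrd.img boot=live union=aufs noswap noprompt vga=788 fetch=http://10.130.155.23/clonezilla/filesystem.squashfs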

  • Backup Exec 12.5 not following symlinks on Linux agent

    - by Peter Carrero
    OK, we are at a loss here trying to back up a Linux box to a Backup Exec server... We have a Backup Exec 12.5 server and a "Backup Exec for Windows Servers Linux agent" (sigh) running on one of our Linux boxes. When a backup runs, we get exceptions reported for our symbolic links. It says something like:

        BACKUP- \\<servername>\[ROOT] File \\<servername>\[ROOT]/<foldername>/<symlink> is in the backup selection list but was not found.

    Looking at the selection list, the symlink shows as a 1 KB file in BUE. Tools > Options > Backup has "Backup files and directories by following symbolic links/junction points" selected. The same checkboxes are selected in Job Setup > Job Properties > Edit Template > Advanced.

    Additionally, all the checkboxes are checked in Tools > Options > Linux, Unix, and Macintosh and in Job Setup > Job Properties > Edit Template > Linux, Unix, and Macintosh. These checkboxes read: "Preserve change time", "Follow local mount points", "Follow remote mount points", "Backup contents of soft-linked directories" and "Lock remote files", but apparently changing those options produces the same result.

    Any help on how to get BUE to make a proper backup would be greatly appreciated. Thanks.
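    A hedged first check: "in the backup selection list but was not found" is also what link-following agents report for dangling symlinks, i.e. links whose targets no longer exist. Listing broken links under the backed-up tree rules that in or out (the path is a placeholder):

        # List broken symlinks; -xtype l matches links whose resolved target is missing.
        find /path/being/backed/up -xtype l -ls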

  • Cygwin vim doesn't load files in runtimepath

    - by durron597
    I created a custom syntax file, but none of the files in $VIMRUNTIME seem to load. I followed these pieces of the documentation:

        http://vimdoc.sourceforge.net/htmldoc/filetype.html#new-filetype
        http://vimdoc.sourceforge.net/htmldoc/syntax.html#mysyntaxfile

    When I do :echo &runtimepath I get:

        /home/durron597/.vim,/usr/share/vim/vimfiles,/usr/share/vim/vim73,/usr/share/vim/vimfiles/after,/home/durron597/.vim/after

    However, if I open a file with vim -D, here are the listed files as I type f:

        /etc/vimrc
        /home/durron597/.vimrc
        /usr/share/vim/vim73/plugin/getscriptPlugin.vim
        /usr/share/vim/vim73/plugin/gzip.vim
        /usr/share/vim/vim73/plugin/matchparen.vim
        /usr/share/vim/vim73/plugin/netrwPlugin.vim
        /usr/share/vim/vim73/plugin/rrhelper.vim
        /usr/share/vim/vim73/plugin/spellfile.vim
        /usr/share/vim/vim73/plugin/tarPlugin.vim
        /usr/share/vim/vim73/plugin/tohtml.vim
        /usr/share/vim/vim73/plugin/vimballPlugin.vim
        /usr/share/vim/vim73/plugin/zipPlugin.vim

    Here's the output of ls -lR:

        durron597@Durron597 ~/.vim $ ls -lR
        .:
        total 0
        drwxr-xr-x+ 1 durron597 None 0 Jun  3 11:06 ftdetect
        drwxr-xr-x+ 1 durron597 None 0 Jun  3 11:06 syntax

        ./ftdetect:
        total 1.0K
        -rw-r--r-- 1 durron597 None  45 Jun  3 11:06 mytype.vim

        ./syntax:
        total 4.0K
        -rw-r--r-- 1 durron597 None 740 Jun  3 11:06 mytype.vim

    The exact paths are:

        /home/durron597/.vim/ftdetect/mytype.vim
        /home/durron597/.vim/syntax/mytype.vim

    Note: the problem is that these files don't seem to be loaded at all, not that the files have internal mistakes.

    Output of :filetype:

        filetype detection:ON  plugin:ON  indent:OFF

    Edit 3: No, really, the files are in the right place:

        $ find /home -name '*.vim'
        /home/durron597/.vim
        /home/durron597/.vim/ftdetect/fix.vim
        /home/durron597/.vim/syntax/fix.vim
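    A hedged pair of checks from inside vim that usually separates "not on the runtimepath" from "on the path but never triggered" (the filetype name below is assumed to match the ftdetect file):

        " List every script actually sourced this session; the files under
        " ~/.vim/ftdetect should appear here if filetype detection ran them.
        :scriptnames
        " Force the detection/syntax pipeline by hand for the current buffer.
        :set filetype=mytype
        :syntax on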

  • OpenVPN Bridge LAN-to-LAN Configuration?

    - by Shad Reese
    I'm trying to configure an OpenVPN bridged LAN-to-LAN setup. Currently, I have the OpenVPN bridge server/client setup up and running. On the server side, my br-lan interface has tap0, eth0, and wlan0 in the bridge group. On the client side, the br-lan interface has eth0 and wlan0 in the bridge group; the client tap0 is outside of the br-lan group. Currently the two bridge groups are connected via the wlan0 interfaces (the server side is the access point - AP - and the client side is the wireless client). My goal is to connect the two bridge groups with a wireless VPN pipe.

    My network configuration:

        Server:
          br-lan: 10.4.96.50
        Client:
          br-lan: 10.4.96.75
          tap0:   10.4.96.100  <---- issued by the VPN server

    Unfortunately, I'm stuck with using a bridge instead of a routed OpenVPN setup. My question is: how (if possible) do I add the client tap0 interface to the client bridge group, to ensure all traffic between the server/client bridge groups uses the VPN pipe?

    SERVER CONFIG FILE:

        config openvpn sample_server
            # Set to 1 to enable this instance:
            option enable 1
            option port 1194
            option proto udp
            option dev tap0
            option key /etc/easy-rsa/keys/server.key
            option dh /etc/easy-rsa/keys/dh1024.pem
            option ifconfig_pool_persist /tmp/ipp.txt
            option server_bridge "10.4.96.50 255.255.255.0 10.4.96.100 10.4.96.200"
            list push "redirect-gateway local def1"
            list push "dhcp-option DNS 10.4.96.14"
            option duplicate_cn 1
            option comp_lzo 1
            option max_clients 100
            option log /tmp/openvpn.log
            option verb 3

    CLIENT CONFIG FILE:

        config 'openvpn' 'sample_client'
            option 'enable' '1'
            option 'client' '1'
            option 'dev' 'tap'
            option 'proto' 'udp'
            list 'remote' '10.4.96.50 1194'
            option 'status' /tmp/openvpn-status.log
            option 'log' /tmp/openvpn.log
            option 'ca' '/etc/easy-rsa/keys/ca.crt'
            option 'cert' '/etc/easy-rsa/keys/client.crt'
            option 'key' '/etc/easy-rsa/keys/client.key'
            option 'comp_lzo' '1'
            option 'verb' '5'

    Thanks in advance,
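    On a bridged setup like this, the tap device is usually added to the bridge by a script OpenVPN runs when the interface comes up; a hedged sketch (script path is illustrative; OpenVPN passes the interface name as the first argument to up scripts):

        #!/bin/sh
        # /etc/openvpn/up.sh -- run by OpenVPN when the tap device comes up.
        brctl addif br-lan "$1"            # $1 is the tap interface name
        ifconfig "$1" 0.0.0.0 promisc up   # no IP on the member port; the bridge owns the address

    Wiring it into the client config would look roughly like:

        option 'script_security' '2'
        option 'up' '/etc/openvpn/up.sh'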

  • How to get Geo::Coder::Many with cpan?

    - by mnemonic
    Ubuntu is installed for development of a Perl project.

        $ aptitude search Geo-Coder
        i   libgeo-coder-googlev3-perl   - Perl module providing access to Google Map

    Aptitude does not offer Geo::Coder::Many, and cpan cannot build it:

        $ sudo cpan Geo::Coder::Many

    Then:

        CPAN: Storable loaded ok (v2.27)
        Going to read '/home/jh/.cpan/Metadata'
          Database was generated on Wed, 16 Oct 2013 06:17:04 GMT
        Running install for module 'Geo::Coder::Many'
        Running make for K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz
        CPAN: Digest::SHA loaded ok (v5.61)
        CPAN: Compress::Zlib loaded ok (v2.033)
        Checksum for /home/jh/.cpan/sources/authors/id/K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz ok
        CPAN: File::Temp loaded ok (v0.22)
        CPAN: Parse::CPAN::Meta loaded ok (v1.4401)
        CPAN: CPAN::Meta loaded ok (v2.110440)
        CPAN: Module::CoreList loaded ok (v2.49_02)
        CPAN: Module::Build loaded ok (v0.38)
        CPAN.pm: Going to build K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz

        Can't locate Geo/Coder/Many/Google.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        Can't locate Geo/Coder/Many/Google in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        BEGIN failed--compilation aborted at Build.PL line 54.
        Warning: No success on command[/usr/bin/perl Build.PL --installdirs site]
        CPAN: YAML loaded ok (v0.77)
        KAORU/Geo-Coder-Many-0.42.tar.gz
        /usr/bin/perl Build.PL --installdirs site -- NOT OK
        Running Build test
        Make had some problems, won't test
        Running Build install
        Make had some problems, won't install
        Could not read metadata file. Falling back to other methods to determine prerequisites

    Any suggestions how to resolve this issue?
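    The Build.PL appears to die while probing for geocoder backend modules; a hedged workaround (not verified against this box) is to install a concrete backend first and retry, or to use cpanminus, which keeps a complete build log for inspection:

        # Install a backend Geo::Coder::Many can wrap, then retry the build.
        sudo cpan Geo::Coder::Google
        sudo cpan Geo::Coder::Many
        # Or with cpanminus, which logs the full build under ~/.cpanm/.
        sudo apt-get install cpanminus
        sudo cpanm --verbose Geo::Coder::Many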

  • How can a Virtual PC with Win XP install the East Asian language files? (Or does any browser come with Chinese fonts?)

    - by Jian Lin
    How can a Virtual PC with Win XP on it install the East Asian language files? (Or does any browser come with Chinese fonts?)

    After setting up a virtual PC with Win XP, if Chinese fonts are needed, the usual way is to go to the Control Panel, select "Regional and Language Options", go to the second tab, and check the box "Install files for East Asian languages". After clicking OK, it asks for the file cplexe.exe on the XP SP3 CD 3... and it is said to be about 230 MB... In such a case, how can the language pack or fonts be installed?

    (Update: I found that this is true for Windows 7's Virtual PC with XP on it, as well as for the XP SP3 with IE 8 that can be downloaded from the link below.)

    (I downloaded the virtual hard drive .vhd file from http://www.microsoft.com/downloads/details.aspx?FamilyId=21EABB90-958F-4B64-B5F1-73D0A413C8EF&displaylang=en so there is no "CD 3" there...)

    Or does any browser come with all the Unicode fonts without needing the OS to support it?
