Search Results

Search found 24720 results on 989 pages for 'path info'.

Page 283/989

  • Microphone crackling in Ubuntu 12.04

    - by Tomas
    I am not very experienced in Linux and have a problem with a microphone that works fine on Windows. The microphone is an external one, and when listening to recorded sound (both in Sound Recorder and in the Skype Echo Test) you can hear a crackling noise. I fixed the output crackling by replacing load-module module-hal-detect with load-module module-hal-detect tsched=0 in /etc/pulse/default.pa, but I have no idea how to fix the input. Hardware info: Card: HDA ATI SB, Chip: Realtek ALC272X. Thanks for help!
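
    For reference, a sketch of the tweak described above applied to the capture side. This is an assumption, not a verified fix: on 12.04 the detection module in /etc/pulse/default.pa is usually module-udev-detect rather than module-hal-detect, and the same tsched=0 argument disables timer-based scheduling for capture as well as playback.

        # /etc/pulse/default.pa (sketch) - disable timer-based scheduling
        load-module module-udev-detect tsched=0

    After editing, restart PulseAudio with pulseaudio -k (it respawns automatically) and re-test the recording.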

    Read the article

  • Keyword Analysis Tool - Right Method For SEO

    Keyword analysis tools have become a vital element of effective online marketing. This kind of tool provides great insight into the words and phrases people use to search the web for info and products. With that information you can make much better choices in your marketing strategies. It can help you see inside the entire market of online users.

    Read the article

  • After upgrading to trusty, ALSA midi connection (aconnect) doesn't seem to work right

    - by SougonNaTakumi
    Previously, in Kubuntu 13.10, I was able to open vmpk or plug in a MIDI keyboard and, provided that TiMidity was running in server mode, I could run

        aconnect [keyboard port (129:0 for vmpk)] 14:0
        aconnect 14:0 128:0

    and I could play the keyboard and get sound. But now, a while after upgrading to trusty, I tried to do that and didn't get any sound. TiMidity itself still plays files fine, but if I try to play them with aplaymidi, I still just get silence. Oddly, the midi files are clearly being read. When I ran (where 130:0 was vmpk's input port)

        aplaymidi -p 130:0 ~/path/to/midi.mid

    vmpk was highlighting notes on the piano as if it were playing the midi. One time I tried this, TiMidity (?) very briefly played a fraction of a second of the first chord of my song before everything went silent and vmpk just highlighted the first voice on the keyboard as usual. Now the weirdest part of this is that probably about 40% of the time, when I've played at least one note with either aplaymidi or vmpk, when I run

        aconnect -x

    I get a sudden burst of a note or chord from my speakers (that is, if I played one note, I get a note; if I played multiple sequential notes, they turn into a chord), as if the notes were being queued up but not played, and that somehow liberated them. I have no idea what's going on there. A little while ago I remember having a problem with Audacity playing wav files sped up and also locking up if I tried to pause it, which it stopped doing when I set the audio devices to the actual audio devices rather than pulse. But now when I checked again, it's doing the opposite: it won't play audio at all and/or acts weirdly if I don't set the audio devices to pulse, and either way will very occasionally randomly do the speeding-up thing regardless. Oddly, in the midst of what's looking like a pretty screwed up sound system, sound in VLC and Firefox has been working fine, and if I play a wav file with

        aplay ~/path/to/sound.wav

    that works fine too. Any idea what I could do to figure out what's wrong with ALSA and/or fix it?
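
    As a first check after the upgrade, it may help to confirm that the same wiring described above can still be built by hand. A sketch, reusing the port numbers from the question (they can differ per boot, so list them first):

        # list readable (sender) and writable (receiver) sequencer ports
        aconnect -i
        aconnect -o
        # start TiMidity as an ALSA sequencer client, then wire vmpk/keyboard -> 14:0 -> TiMidity
        timidity -iA &
        aconnect 129:0 14:0
        aconnect 14:0 128:0
        # tear all connections down again
        aconnect -x

    If the ports connect but stay silent, the problem is more likely in the PulseAudio/ALSA output path than in the sequencer routing itself.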

    Read the article

  • PHP include_path doesn't work

    - by 50ndr33
    I have the documents at http://www.example.com/ in /home/www/example.com/www running on Debian Squeeze.

        /home/www/example.com/
            www/
                index.php
            php/
                include_me.php

    In the php.ini I've uncommented and changed to:

        include_path = ".:/home/www/example.com"

    In a script index.php in www, I have require_once("/php/include_me.php"). The output I am getting from PHP is:

        Warning: require_once(/php/include_me.php) [function.require-once]: failed to open stream: No such file or directory in /home/www/example.com/www/index.php on line 2
        Fatal error: require_once() [function.require]: Failed opening required '/php/include_me.php' (include_path='.:/home/www/example.com') in /home/www/example.com/www/index.php on line 2

    As you can see, the include_path is set correctly according to the error. But if I do require_once("../php/include_me.php");, it works. Therefore, something has to be wrong with the include_path. Does anyone know what I can do to fix it?
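
    One detail worth noting (a hint, not a verified diagnosis of this particular setup): PHP only consults include_path when the requested path is neither absolute nor explicitly relative. A path beginning with "/" is treated as an absolute filesystem path, so include_path is skipped entirely. A minimal sketch of the difference, assuming the layout above:

        <?php
        // Leading slash: absolute path, include_path is ignored,
        // so PHP looks for /php/include_me.php at the filesystem root.
        require_once '/php/include_me.php';

        // No leading slash: resolved against include_path ('.:/home/www/example.com'),
        // which finds /home/www/example.com/php/include_me.php.
        require_once 'php/include_me.php';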

    Read the article

  • Have apache choose a php version based on the extension in the url, but with a single file on the filesystem

    - by Somejan
    I want to configure a local apache server to serve php files with different php versions. In my document root I have phpinfo.php, now if I go to http://localhost/phpinfo.php4, I want to see the phpinfo.php file processed with php4, if I go to http://localhost/phpinfo.php5 I want to see the same file processed with php5. Note: both php 4 and 5 are already installed side by side, I have no problem configuring apache to treat files that have a .php4 or .php5 extension on the filesystem with the correct php version. What I want is for apache to do the following: If the url-path ends in .php5, serve the file which has a .php extension on the filesystem using the application/x-httpd-php5 handler. If the url-path ends in .php4, serve the same file with the .php extension on the filesystem using the application/x-httpd-php4 handler.
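
    One possible sketch (untested against this exact setup, and assuming the two PHP versions really are selected by the application/x-httpd-php4 and application/x-httpd-php5 types, as the question implies) is to let mod_rewrite map the .php4/.php5 URL back to the single .php file on disk and force the type with the T flag:

        # in the vhost, with mod_rewrite enabled
        RewriteEngine On
        # /phpinfo.php5 -> serve phpinfo.php through the PHP 5 handler
        RewriteRule ^(.+)\.php5$ $1.php [T=application/x-httpd-php5,L]
        # /phpinfo.php4 -> serve phpinfo.php through the PHP 4 handler
        RewriteRule ^(.+)\.php4$ $1.php [T=application/x-httpd-php4,L]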

    Read the article

  • Visual Studio 2010 Takes the Cake for Vermont.NETs April 12th Meeting

    Sorry, my cheesy title just could not be helped. We're getting a cake for the Vermont.NET VS2010 launch meeting on April 12th (www.vtdotnet.org for more info). Since there seems to be no high-resolution logo available, I created this image for the local bakery to scan and put on the cake.

    Read the article

  • View a PDF with quick web view through an Apache proxy

    - by Musa
    I have a site (IIS) that is accessed via a proxy in Apache (on an IBM i). This site serves PDFs that have quick web view enabled; if I access a PDF directly from the IIS server it starts to display immediately, but if I go through the proxy I have to wait until the entire PDF downloads before I can view it. In the Apache config file I use:

        ProxyPass /path/ http://xxx.xxx.xxx.xxx/
        <LocationMatch "/path/">
            Header set Cache-Control "no-cache"
        </LocationMatch>

    I tried adding SetEnv proxy-sendcl to the LocationMatch directive; this had no effect. The PDFs that view quickly make a lot of partial requests. This is the initial request and response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip

        HTTP/1.1 200 OK
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 15330238
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    This is a partial request and response:

        GET http://xxx.xxx.xxx.xxx/xxx.PDF HTTP/1.1
        Host: xxx.xxx.xxx.xxx
        Proxy-Connection: keep-alive
        Cache-Control: no-cache
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept: */*
        Referer: http://xxx.xxx.xxx.xxx/xxxx.PDF
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8
        Cookie: chocolatechip
        Range: bytes=0-32767

        HTTP/1.1 206 Partial Content
        Via: 1.1 xxxxxxxx
        Connection: Keep-Alive
        Proxy-Connection: Keep-Alive
        Content-Length: 32768
        Date: Mon, 25 Aug 2014 12:48:31 GMT
        Content-Range: bytes 0-32767/15330238
        Content-Type: application/pdf
        ETag: "b6262940bbecf1:0"
        Server: Microsoft-IIS/7.5
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        X-Powered-By: ASP.NET

    These are the headers I get if I go through the proxy:

        GET /path/xxx.PDF HTTP/1.1
        Host: domain:xxxx
        Connection: keep-alive
        Cache-Control: no-cache
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
        Pragma: no-cache
        User-Agent: Mozilla/5.0 (Windows NT 6.2; rv:9.0.1) Gecko/20100101 Firefox/9.0.1
        Accept-Encoding: gzip,deflate,sdch
        Accept-Language: en-US,en;q=0.8

        HTTP/1.1 200 OK
        Date: Mon, 25 Aug 2014 13:28:42 GMT
        Server: Microsoft-IIS/7.5
        Content-Type: application/pdf
        Last-Modified: Fri, 22 Aug 2014 13:16:14 GMT
        Accept-Ranges: bytes
        ETag: "b6262940bbecf1:0"-gzip
        X-Powered-By: ASP.NET
        Cache-Control: no-cache
        Expires: Thu, 24 Aug 2017 13:28:42 GMT
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Keep-Alive: timeout=300, max=100
        Connection: Keep-Alive
        Transfer-Encoding: chunked

    I'm guessing it's because the proxy uses Transfer-Encoding: chunked, but I'm not sure and wasn't able to turn it off to check. Browser: Chrome 36.0.1985.143 m, using the native PDF viewer. Any help to get PDF quick web view working through the proxy would be appreciated.
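
    One clue in the headers above (a hypothesis, not a confirmed fix): the proxied response carries Content-Encoding: gzip and an ETag ending in -gzip, which means the proxy is compressing the PDF. Once mod_deflate compresses the body, Content-Length disappears, the response goes out chunked, and byte-range requests can no longer be satisfied, which is exactly what quick web view depends on. A sketch of excluding PDFs from compression on the proxy side:

        <LocationMatch "/path/">
            Header set Cache-Control "no-cache"
            # don't gzip PDFs so Content-Length and Range requests survive the proxy
            SetEnvIfNoCase Request_URI "\.pdf$" no-gzip dont-vary
        </LocationMatch>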

    Read the article

  • Automating silent software deployments on Solaris 10

    - by datSilencer
    Hello everyone. Essentially, the question I'd like to ask is related to the automation of software package deployments on Solaris 10. Specifically, I have a set of software components in tar files that run as daemon processes after being extracted and configured in the host environment. Pretty much like any server-side software package out there, I need to ensure that a list of prerequisites is met before extracting and running the software. For example:

    - Checking that certain users exist, and that they are associated with one or more user groups. If not, create them and their group associations.
    - Checking that target application folders exist and, if not, creating them with the preconfigured path values defined when the package was assembled.
    - Checking that such folders have the appropriate access control level and ownership for a certain user. If not, setting them.
    - Checking that a set of environment variables is defined in /etc/profile, pointed to predefined path locations, added to the general $PATH environment variable, and finally exported into the user's environment. Other files involved include /etc/services and /etc/system.

    Obviously, doing this for many boxes (the goal in question) by hand can be slow and error prone. I believe a better alternative is to somehow automate this process. So far I have thought about the following options, and discarded them for one reason or another:

    1. Traditional shell scripts. I've only troubleshooted these before, and I don't really have much experience with them. These would be my last resort.
    2. Python scripts using the pexpect library for analyzing system command output. This was my initial choice, since the target Solaris environments have it installed. However, I want to make sure that I'm not reinventing the wheel again :P.
    3. Ant or Gradle scripts. They may be an option, since the boxes also have Java 1.5 enabled, and the fileset abstractions can be very useful. However, they may fall short when dealing with user and folder permission checking/setting.

    It seems obvious to me that I'm not the first person in this situation, but I don't seem to find a utility framework geared towards this purpose. Please let me know if there's a better way to accomplish this. I thank you for your time and help.
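
    For what it's worth, most of the checks listed above reduce to a handful of idempotent shell commands, so a plain script (possibly wrapped by whichever tool ends up driving the rollout) may be enough. A sketch only; all user, group and path names are placeholders invented for illustration:

        #!/bin/sh
        # create the service group and user only if they are missing
        getent group appgrp >/dev/null || /usr/sbin/groupadd appgrp
        id appuser >/dev/null 2>&1    || /usr/sbin/useradd -g appgrp -d /opt/app appuser
        # create target folders with the expected ownership and mode
        for d in /opt/app /var/opt/app/log; do
            [ -d "$d" ] || mkdir -p "$d"
            chown appuser:appgrp "$d"
            chmod 750 "$d"
        done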

    Read the article

  • Incremental backup and sync software

    - by martjno
    I need free software for Windows (with a GUI or command line) that does incremental backups: copying all files and storing changed or deleted files in a directory named after the last change date (or a progressive number). To be more precise: D:\ is my data drive and E:\ is my backup drive. If I want to back up all my data from D:\:

    - E:\d_lastbackup\ will contain a plain copy of all the files and folder content (no compression or archiving, same file attributes) of D
    - E:\d_20090822\ will contain all files (with their full path) that were changed or deleted in the last version (since the previous one)
    - E:\d_20090820\ will contain all files (with their full path) that were changed or deleted in the last version (since the previous one)

    and so on... I had software that worked perfectly with an old USB hard disk by Maxtor, but it works only on that device. Any suggestion?

    Read the article

  • The Internet Key Wave MW833UP is not recognized in Ubuntu

    - by gio900
    I can't use my Onda MW833UP... :( Any advice? Here is something that someone else may understand:

        ~$: lsb_release -a
        No LSB modules are available.
        Distributor ID: Ubuntu
        Description: Ubuntu 11.10
        Release: 11.10
        Codename: oneiric

        ~$: lsusb
        Bus 001 Device 005: ID 1ee8:0012

        ~$: dmesg
        [ 22.709475] cdc_acm 1-1:1.0: ttyACM0: USB ACM device
        [ 22.714856] usbcore: registered new interface driver cdc_acm
        [ 22.714866] cdc_acm: USB Abstract Control Model driver for USB modems and ISDN adapters
        [ 23.520490] ieee80211 phy0: wl_ops_bss_info_changed: arp filtering: enabled true, count 1 (implement)
        [ 24.244530] usbcore: registered new interface driver usbserial
        [ 24.244575] USB Serial support registered for generic
        [ 24.244673] usbcore: registered new interface driver usbserial_generic
        [ 24.244681] usbserial: USB Serial Driver core
        [ 24.265879] USB Serial support registered for GSM modem (1-port)
        [ 24.285680] usbcore: registered new interface driver option
        [ 24.285691] option: v0.7.2:USB Driver for GSM modems
        [ 24.425878] EXT4-fs (sda9): re-mounted. Opts: errors=remount-ro,commit=600
        [ 24.736540] EXT4-fs (sda8): re-mounted. Opts: commit=600
        [ 35.705796] Easy slow down manager: checking for SABI support.
        [ 35.706002] Easy slow down manager: SABI is supported (f5189)
        [ 36.060099] usbcore: deregistering interface driver uvcvideo
        [ 139.508061] CE: hpet increased min_delta_ns to 20113 nsec
        [ 6798.378917] usb 1-1: USB disconnect, device number 5
        [ 6809.108232] usb 1-1: new high speed USB device number 6 using ehci_hcd
        [ 6809.242692] scsi5 : usb-storage 1-1:1.0
        [ 6810.241257] scsi 5:0:0:0: CD-ROM Onda Datacard CD-ROM 0001 PQ: 0 ANSI: 0
        [ 6810.241841] scsi 5:0:0:1: Direct-Access Onda Storage 0001 PQ: 0 ANSI: 0
        [ 6810.271410] sr0: scsi3-mmc drive: 0x/0x caddy
        [ 6810.272099] sr 5:0:0:0: Attached scsi CD-ROM sr0
        [ 6810.272852] sr 5:0:0:0: Attached scsi generic sg1 type 5
        [ 6810.279954] sd 5:0:0:1: [sdb] Attached SCSI removable disk
        [ 6810.281210] sd 5:0:0:1: Attached scsi generic sg2 type 0
        [ 6810.380591] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
        [ 6810.380617] sr: Sense Key : Hardware Error [current]
        [ 6810.380625] sr: Add. Sense: No additional sense information
        [ 6810.613937] sr0: CDROM (ioctl) error, command: Xpwrite, Read disk info 51 00 00 00 00 00 00 00 02 00
        [ 6810.613972] sr: Sense Key : Hardware Error [current]
        [ 6810.613984] sr: Add. Sense: No additional sense information
        [ 6810.673716] usb 1-1: USB disconnect, device number 6
        [ 6815.572142] usb 1-1: new high speed USB device number 7 using ehci_hcd
        [ 6815.706828] cdc_acm 1-1:1.0: ttyACM0: USB ACM device

    The last 3 lines are where I inserted the Internet key, then reconnected it.

        usb-device
        T: Bus=01 Lev=01 Prnt=01 Port=00 Cnt=01 Dev#= 7 Spd=480 MxCh= 0
        D: Ver= 2.00 Cls=ef(misc ) Sub=02 Prot=01 MxPS=64 #Cfgs= 1
        P: Vendor=1ee8 ProdID=0012 Rev=00.01
        S: Manufacturer=Onda
        S: Product=MW833UP
        S: SerialNumber=9230B35D870F9CB7AE684EACC5C12BE5EC33B26E
        C: #Ifs= 2 Cfg#= 1 Atr=a0 MxPwr=500mA
        I: If#= 0 Alt= 0 #EPs= 1 Cls=02(commc) Sub=02 Prot=01 Driver=cdc_acm
        I: If#= 1 Alt= 0 #EPs= 2 Cls=0a(data ) Sub=00 Prot=00 Driver=cdc_acm

    Then there is /dev/ttyACM0. When the key is connected to the USB port, everything that will meant...

    Read the article

  • What useful things can one add to one's .bashrc?

    - by gyaresu
    Is there anything that you can't live without and will make my life SO much easier? Here are some that I use ('diskspace' & 'folders' are particularly handy).

        # some more ls aliases
        alias ll='ls -alh'
        alias la='ls -A'
        alias l='ls -CFlh'
        alias woo='fortune'
        alias lsd="ls -alF | grep /$"

        # This is GOLD for finding out what is taking so much space on your drives!
        alias diskspace="du -S | sort -n -r |more"

        # Command line mplayer movie watching for the win.
        alias mp="mplayer -fs"

        # Show me the size (sorted) of only the folders in this directory
        alias folders="find . -maxdepth 1 -type d -print | xargs du -sk | sort -rn"

        # This will keep you sane when you're about to smash the keyboard again.
        alias frak="fortune"

        # This is where you put your hand rolled scripts (remember to chmod them)
        PATH="$HOME/bin:$PATH"

    Read the article

  • SSH: Port Forwarding, Firewalls, & Plesk

    - by Kian Mayne
    I edited my SSH configuration to accept connections on port 213, as it was one of the few ports that my work firewall allows through. I then restarted sshd and everything was going well. I tested the SSH server locally and checked that the sshd service was listening on port 213; however, I still cannot get it to work outside of localhost. PuTTY gives a connection refused message, and some of the port-checking sites I tried said the port was closed. To me, this is either firewall or port forwarding, but I've already added inbound and outbound exceptions for it. Is this a problem with my server host, or is there something I've missed? My full SSH config file, as requested:

        # $OpenBSD: sshd_config,v 1.73 2005/12/06 22:38:28 reyk Exp $

        # This is the sshd server system-wide configuration file. See
        # sshd_config(5) for more information.

        # This sshd was compiled with PATH=/usr/local/bin:/bin:/usr/bin

        # The strategy used for options in the default sshd_config shipped with
        # OpenSSH is to specify options with their default value where
        # possible, but leave them commented. Uncommented options change a
        # default value.

        Port 22
        Port 213
        #Protocol 2,1
        Protocol 2
        #AddressFamily any
        #ListenAddress 0.0.0.0
        #ListenAddress ::

        # HostKey for protocol version 1
        #HostKey /etc/ssh/ssh_host_key
        # HostKeys for protocol version 2
        #HostKey /etc/ssh/ssh_host_rsa_key
        #HostKey /etc/ssh/ssh_host_dsa_key

        # Lifetime and size of ephemeral version 1 server key
        #KeyRegenerationInterval 1h
        #ServerKeyBits 768

        # Logging
        # obsoletes QuietMode and FascistLogging
        #SyslogFacility AUTH
        SyslogFacility AUTHPRIV
        #LogLevel INFO

        # Authentication:

        #LoginGraceTime 2m
        #PermitRootLogin yes
        #StrictModes yes
        #MaxAuthTries 6

        #RSAAuthentication yes
        #PubkeyAuthentication yes
        #AuthorizedKeysFile .ssh/authorized_keys

        # For this to work you will also need host keys in /etc/ssh/ssh_known_hosts
        #RhostsRSAAuthentication no
        # similar for protocol version 2
        #HostbasedAuthentication no
        # Change to yes if you don't trust ~/.ssh/known_hosts for
        # RhostsRSAAuthentication and HostbasedAuthentication
        #IgnoreUserKnownHosts no
        # Don't read the user's ~/.rhosts and ~/.shosts files
        #IgnoreRhosts yes

        # To disable tunneled clear text passwords, change to no here!
        #PasswordAuthentication yes
        #PermitEmptyPasswords no
        PasswordAuthentication yes

        # Change to no to disable s/key passwords
        #ChallengeResponseAuthentication yes
        ChallengeResponseAuthentication no

        # Kerberos options
        #KerberosAuthentication no
        #KerberosOrLocalPasswd yes
        #KerberosTicketCleanup yes
        #KerberosGetAFSToken no

        # GSSAPI options
        #GSSAPIAuthentication no
        GSSAPIAuthentication yes
        #GSSAPICleanupCredentials yes
        GSSAPICleanupCredentials yes

        # Set this to 'yes' to enable PAM authentication, account processing,
        # and session processing. If this is enabled, PAM authentication will
        # be allowed through the ChallengeResponseAuthentication mechanism.
        # Depending on your PAM configuration, this may bypass the setting of
        # PasswordAuthentication, PermitEmptyPasswords, and
        # "PermitRootLogin without-password". If you just want the PAM account and
        # session checks to run without PAM authentication, then enable this but set
        # ChallengeResponseAuthentication=no
        #UsePAM no
        UsePAM yes

        # Accept locale-related environment variables
        AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
        AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
        AcceptEnv LC_IDENTIFICATION LC_ALL
        #AllowTcpForwarding yes
        #GatewayPorts no
        #X11Forwarding no
        X11Forwarding yes
        #X11DisplayOffset 10
        #X11UseLocalhost yes
        #PrintMotd yes
        #PrintLastLog yes
        #TCPKeepAlive yes
        #UseLogin no
        #UsePrivilegeSeparation yes
        #PermitUserEnvironment no
        #Compression delayed
        #ClientAliveInterval 0
        #ClientAliveCountMax 3
        #ShowPatchLevel no
        #UseDNS yes
        #PidFile /var/run/sshd.pid
        #MaxStartups 10
        #PermitTunnel no
        #ChrootDirectory none

        # no default banner path
        #Banner /some/path

        # override default of no subsystems
        Subsystem sftp /usr/libexec/openssh/sftp-server
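
    A few checks worth running on the server itself (a sketch; the exact firewall tooling depends on the distribution, and Plesk's own firewall module may rewrite rules behind your back):

        # is sshd really listening on 213 on all interfaces, not just 127.0.0.1?
        netstat -tlnp | grep sshd
        # temporarily open the port in iptables to rule the local firewall out
        iptables -I INPUT -p tcp --dport 213 -j ACCEPT
        # on SELinux-enabled hosts, sshd must also be allowed to bind the non-standard port
        semanage port -a -t ssh_port_t -p tcp 213

    If the port still shows as closed from outside after that, the block is most likely upstream at the hosting provider rather than on the box.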

    Read the article

  • Mac OS X 10.5/6, authenticate against NIS or LDAP when both servers have your username

    - by Wang
    We have an organization-wide LDAP server and a department-only NIS server. Many users have accounts with the same name on both servers. Is there any way to get Leopard/Snow Leopard machines to query one server, and then the other, and let the user log in if his username/password combination matches at least one record? I can get either NIS authentication or LDAP authentication. I can even enable both, with LDAP set as higher priority, and authenticate using the name and password listed on the LDAP server. However, in the last case, if I set the LDAP domain as higher-priority in Directory Utility's search path and then provide the username/password pair listed in the NIS record, then my login is rejected even though the NIS server would accept it. Is there any way to make the OS check the rest of the search path after it finds the username?

    Read the article

  • Incorrect Meta information in Google

    - by Ashfame
    Google shows incorrect meta info (title & description) in its search results for an addon domain; the information shown belongs to the primary domain of the hosting account. I mention this because addon domains live in a sub-directory of the primary domain. Any ideas what could be the reason? Check this Google search, which shows the information for http://katherinegaudette.com/

    Read the article

  • How can I write byte[] to socket outputstream and specify the end of file?

    - by Christopher Francisco
    I've googled for 2 days straight and I can't find how to do this. I have an open stream between client and server, and the client will send a JSON string (encrypted to bytes) to the server every 3 to 5 seconds. How can I write to the Socket OutputStream so that I can read each JSON string on the server? I'm guessing I need to specify some kind of end of file or something, but I can't find any info on how to achieve this.
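
    A TCP stream has no built-in message boundaries, so the usual approach is to frame each payload yourself, for example by writing a length prefix before the bytes. A minimal sketch (variable names are illustrative; java.io.DataInputStream/DataOutputStream and java.net.Socket assumed):

        // sender: prefix each encrypted JSON payload with its length
        DataOutputStream out = new DataOutputStream(socket.getOutputStream());
        out.writeInt(payload.length);   // payload is the byte[] for one JSON message
        out.write(payload);
        out.flush();

        // receiver: read exactly one message per loop iteration
        DataInputStream in = new DataInputStream(socket.getInputStream());
        int len = in.readInt();
        byte[] msg = new byte[len];
        in.readFully(msg);              // blocks until the whole message has arrived

    Writing a delimiter that cannot occur in the payload after each message and reading up to it works too; the length prefix just avoids having to escape the delimiter.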

    Read the article

  • Does the upload file INPUT HTML element give any personal information away?

    - by Senseful
    I'm wondering what personal information the file input (<input type="file">) element gives the website. I noticed that it does show the file name and the website does seem to have access to it. What about the file's path? If the file is located in My Documents, they could find out the user name via the path (e.g. C:\Documents and Settings\Bob\My Documents) which many times is the actual user's name that is using the website. What information do most modern browsers allow the website to access when a user uses the file input element? Could JavaScript somehow be used to gain more information? What about when plugins (such as Flash or Java) implement file uploading?
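
    As a quick way to see what actually reaches page script, the File API in current browsers exposes only the file's metadata, and the element's value property masks the path as C:\fakepath\. A small sketch (plain HTML/JavaScript, no library assumed):

        <input type="file" id="f">
        <script>
        document.getElementById('f').addEventListener('change', function () {
          var file = this.files[0];
          console.log(file.name, file.size, file.type); // name, size and MIME type only
          console.log(this.value);                      // "C:\fakepath\name.ext", no real path
        });
        </script>

    Some very old browsers and plugins have had path-leaking bugs, but that is the exception rather than the documented behaviour.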

    Read the article

  • USB sector 0 not found on Kingston USB DT100 G2

    - by java
    Windows constantly asks me to "Format Disk". When I go to the command prompt and type format H: /fs:ntfs or format H: /fs:fat32, the response is: Cannot determine the number of sectors on this volume. In DISKPART I get:

        DISKPART> detail disk

        Kingston DT 100 G2 USB Device
        Disk ID: 00000000
        Type : USB
        Status : Online
        Path : 0
        Target : 0
        LUN ID : 0
        Location Path : UNAVAILABLE
        Current Read-only State : No
        Read-only : No
        Boot Disk : No
        Pagefile Disk : No
        Hibernation File Disk : No
        Crashdump Disk : No
        Clustered Disk : No

        DISKPART> detail volume

        Read-only : No
        Hidden : No
        No Default Drive Letter: No
        Shadow Copy : No
        Offline : No
        BitLocker Encrypted : No
        Installable : No
        Volume Capacity : 0 B
        Volume Free Space : 0 B

    What is the problem?
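
    If the stick is recoverable at all, a clean/re-create pass in DISKPART is the usual next step. A sketch only; a reported capacity of 0 B quite often means the flash controller itself has failed, in which case no software-level fix will help:

        DISKPART> list disk
        DISKPART> select disk 1      (pick the Kingston by its size, not by guesswork)
        DISKPART> clean              (wipes the partition table, including sector 0)
        DISKPART> create partition primary
        DISKPART> format fs=fat32 quick
        DISKPART> assign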

    Read the article

  • external USB HD doesn't show up on 'sudo fdisk -l' on one Ubuntu, shows up on another (both 'precise')

    - by Menelaos
    I have a 1.5TB WD external USB HDD and two Ubuntu systems, both 'precise'. When I plug the disk into system A it doesn't show up in the output of sudo fdisk -l, nor is it automatically mounted (note that that wasn't always the case - it used to appear in the past). When I plug it into system B (again Ubuntu precise) it shows up when I do sudo fdisk -l (output appended at the end) and mounts automatically just fine. What does this discrepancy point to, and what kind of diagnostics should I run / tools should I use to troubleshoot the problem? I followed the suggestion I received to do sudo tail -f /var/log/syslog and the output is the following when I plug, unplug and replug the USB cable on system A:

        Sep 14 23:27:09 thorin mtp-probe: checking bus 2, device 3: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2"
        Sep 14 23:27:09 thorin mtp-probe: bus: 2, device: 3 was not an MTP device
        Sep 14 23:28:01 thorin kernel: [ 338.994295] usb 2-2: USB disconnect, device number 3
        Sep 14 23:28:04 thorin kernel: [ 341.808139] usb 2-2: new high-speed USB device number 4 using ehci_hcd
        Sep 14 23:28:04 thorin mtp-probe: checking bus 2, device 4: "/sys/devices/pci0000:00/0000:00:13.2/usb2/2-2"
        Sep 14 23:28:04 thorin mtp-probe: bus: 2, device: 4 was not an MTP device
        Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting due to inactivity
        Sep 14 23:29:54 thorin AptDaemon: INFO: Quitting was requested

    (I guess the last two messages are irrelevant.)

    Output of sudo fdisk -l on system B:

        $ sudo fdisk -l

        Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930277168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00070db4

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048  2905114623  1452556288   83  Linux
        /dev/sda2      2905116670  2930276351    12579841    5  Extended
        /dev/sda5      2905116672  2930276351    12579840   82  Linux swap / Solaris

        Disk /dev/sdb: 250.1 GB, 250059350016 bytes
        255 heads, 63 sectors/track, 30401 cylinders, total 488397168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x9c849c84

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   488375999   244187968+   7  HPFS/NTFS/exFAT

        Disk /dev/sdg: 1500.3 GB, 1500299395072 bytes
        255 heads, 63 sectors/track, 182401 cylinders, total 2930272256 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x0003e17f

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdg1            2048  2930272255  1465135104    7  HPFS/NTFS/exFAT
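
    Some low-level diagnostics that usually narrow this down on the non-working system (a sketch; substitute the real device node where sdX appears):

        lsusb                      # is the WD bridge enumerated at the USB level at all?
        dmesg | tail -n 30         # kernel messages right after plugging the cable in
        ls /dev/sd*                # did a new block device node appear?
        sudo smartctl -a -d sat /dev/sdX   # needs the smartmontools package

    If the drive never even shows up in lsusb on system A, the problem sits below the filesystem layer entirely (port, cable, power or the USB controller), not in anything fdisk or mount settings can explain.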

    Read the article

  • Posting over at LIV Interactive

    - by D'Arcy Lussier
    First, no no no, I’m not leaving GWB! What I am going to be doing is contributing my business-focussed posts to a professional community that fellow Winnipegger Coree Francisco created called LIVInteractive! LIVInteractive publishes articles on business, design, development, content (marketing, copy, etc.), and community…and has some fantastic contributors providing new content regularly! Head on over and check the site out…lots of great info to be had! D

    Read the article

  • Building package from binary files - what's wrong with my control file

    - by Hannes de Jager
    I'm busy trying to build a .deb package from the binaries of my application (non open source) and I'm having trouble getting the correct info to display in the Ubuntu Software Centre (when you click on the .deb file). Please see the screenshot below of the control file and the Software Centre view. It seems like the package name and the package description are swapped: I'm expecting the part in bold to read "attix5pro" and not "Cloud backup agent". Can someone show me my mistake or guide me?
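
    For comparison, a minimal DEBIAN/control layout where the Package field carries the short package name and Description carries the one-line summary followed by the indented long text (the values here are guesses based on the question, not the actual file):

        Package: attix5pro
        Version: 1.0-1
        Section: utils
        Priority: optional
        Architecture: amd64
        Maintainer: Attix5 <support@example.com>
        Description: Cloud backup agent
         Longer description of the backup agent goes here,
         indented by one space on each continuation line.

    If the control file already looks like this, note that the Software Centre displays the Description synopsis as the bold title rather than the Package name, so what looks like a swap may simply be how it renders a .deb.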

    Read the article

  • APress Deal of the Day 11/Dec/2010 - Pro ASP.NET 4 in C# 2010, Fourth Edition

    - by TATWORTH
    Today's Apress Deal of the Day is Pro ASP.NET 4 in C# 2010, Fourth Edition - at $10 for the book, it is a bargain! I am currently reviewing this book and have been impressed by it. I suggest that you go over to http://www.apress.com/info/dailydeal and buy a copy. The book has a brief introduction to C# and then gives a thorough grounding in ASP.NET. The offer will be available until 08:00 UTC on 12 Dec.

    Read the article

  • Schedule a batch file with parameters containing spaces

    - by Danilo Brambilla
    Hi, I need to schedule a task in Windows Server 2003 that executes this script, which deletes files older than n days in the specified folder. The script needs 3 parameters:

    - %1 path to the folder where files need to be deleted
    - %2 file names (e.g. *.log)
    - %3 number of days

        @echo off
        forfiles -p %1 -s -m %2 -d -%3 -c "cmd /c del /q @path"

    The script works fine if the first parameter has no spaces inside. This is an example of parameters that work:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" N:\FOLDER\FOLDER *.zip 60

    This is an example that does not work:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" N:\Program Files\LOG *.zip 60

    This does not work either:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" "N:\Program Files\LOG" *.zip 60

    I think it is a quoting problem but I can't figure out the solution. I'd prefer not to hard-code the values in the script if possible. Thank you all for your help.
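
    A sketch of the usual fix: quote the path when the task calls the script, and strip/re-add the quotes inside the script with %~1 so forfiles always receives one quoted path:

        @echo off
        rem %~1 expands the first argument with any surrounding quotes removed
        forfiles -p "%~1" -s -m %2 -d -%3 -c "cmd /c del /q @path"

    called as:

        "C:\Program Files\SCRIPT\DeleteFilesOlderThanXDays.cmd" "N:\Program Files\LOG" *.zip 60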

    Read the article

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they're getting seems to be generated from the DNS search path, not from a reverse lookup. Take the following configuration. BIND is configured with the hostname and the reverse name.

    Normal zone:

        $TTL 600
        $ORIGIN srv.local.net.
        @               IN  SOA  ns0.local.net. hostmaster.local.net. ( 2014082101 10800 3600 604800 600 )
        @               IN  NS   ns0.local.net.
        @               IN  MX   5 mail.local.net.
        my-new-server   IN  A    10.32.2.30

    And reverse:

        @       IN  SOA  ns0.local.net. hostmaster.local.net. ( 2014082101 10800 3600 604800 600 )
        @       IN  NS   ns0.local.net.
        $ORIGIN 32.10.in-addr.arpa.
        30.2    IN  PTR  my-new-server.srv.local.net.

    Then DHCPD is configured to hand out static leases based on MAC addresses like so:

        subnet 10.32.2.0 netmask 255.255.254.0 {
            option subnet-mask 255.255.254.0;
            option routers 10.32.2.1;
            option domain-name-servers 10.32.2.1;
            option domain-name "util.of1.local.net of1.local.net srv.local.net";
            site-option-space "pxelinux";
            option pxelinux.magic f1:00:74:7e;
            if exists dhcp-parameter-request-list {
                option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
            }
            group {
                option pxelinux.configfile "pxelinux.cfg/pxeboot";
                host my-new-server {
                    fixed-address my-new-server.srv.local.net;
                    hardware ethernet aa:aa:aa:bb:bb:bb;
                }
            }
        }

    So the hostname should be my-new-server.srv.local.net; however, when building an Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net. When building Lucid (10.04) hosts the hostname is correct; it's only on Precise/12.04 nodes that we have the problem. Doing a normal and reverse lookup on the host and IP returns the correct result:

        Sams-MacBook-Pro:~ sam$ host my-new-server
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host 10.32.2.30
        30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file are incorrect too:

        127.0.0.1       localhost
        127.0.1.1       my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So it looks like when it creates the hosts file, it puts the entire contents of the DNS search path onto the local address line, and the FQDN according to the server becomes the short hostname plus the first domain in the search path. Is there a way to get around this behaviour, or fix this so it gets the hostname correctly?
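
    One configuration detail that may explain this (a hypothesis, not a verified fix): option domain-name is defined to carry a single domain, and newer Ubuntu releases appear to build the 127.0.1.1 hosts entry from that value verbatim, while the DNS search list has its own option, domain-search. A sketch of the dhcpd side:

        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";

    With the search list moved out of domain-name, the generated hosts line should collapse to the single srv.local.net suffix, which matches the reverse lookup.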

    Read the article

  • VMWare tools on Ubuntu Server 10.10 kernel source problem

    - by Hamid Elaosta
    After installing and running the VMware config, the configurator needs my kernel headers to compile some modules. OK, so I'll give it them, but it just won't work. It asks for the path of the directory of C header files that match my running kernel. If I run uname -r I get 2.6.35-22-generic-pae. So I tell it the source path is /lib/modules/2.6.25-22-generic-pae/build/include and it returns "The directory of kernel headers (version @@VMWARE@@ UTS_RELEASE) does not match your running kernel (version 2.6.35-22-generic-pae)." I'm confused; can anyone offer suggestions please? I installed the kernel source and headers myself using sudo apt-get install linux-headers-$(uname -r)
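
    A sketch of what usually satisfies the installer on Ubuntu; note that the path has to match the running kernel string exactly (the path quoted above says 2.6.25-22 while uname -r reports 2.6.35-22):

        sudo apt-get install linux-headers-$(uname -r)
        # then point vmware-config-tools.pl at either of these, which are the same tree:
        #   /usr/src/linux-headers-$(uname -r)/include
        #   /lib/modules/$(uname -r)/build/include

    If it still reports @@VMWARE@@ UTS_RELEASE after that, the Tools build is probably too old to parse this kernel's headers (utsrelease.h moved to include/generated in 2.6.33); updating VMware Tools or installing the open-vm-tools packages is the usual way out.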

    Read the article
