Search Results

Search found 65244 results on 2610 pages for 'jim work'.

Page 178 of 2610

  • How Will Mac OS X Snow Leopard Upgrade Work?

    - by Blaenk
    I am relatively new to Mac OS X. I got my MacBook in January, and I have never experienced a new version of the operating system. I am wondering if I should simply upgrade my install to Snow Leopard. I come from Windows, where a complete reformat is usually advised. I would rather not do this, however, and I have a feeling that due to Mac OS X's POSIX-based nature, an in-place upgrade might actually not be all that bad. I guess if things end up screwing up I can simply go ahead and reformat, but I am wondering what it is like to upgrade systems running Mac OS X. I wouldn't want my Snow Leopard installation to end up somehow deficient because of inconsistencies left over from the old system.

    Read the article

  • How to get back-to-work with a Windows 7 PC that has no admin account?

    - by Nam Gi VU
    Hi everyone, I have a PC on which the Administrator account is not active and the only user account left is a Guest user. I want to get the admin account back, but I don't know how to do that from a guest user. I have tried searching the internet and using Recovery Mode, but adding/activating the admin account from the command prompt is not working for me at all. Please help if you have run into and solved this before! Thank you, Nam. P.S. You can see my attempts at solving this problem on Diigo.
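
    One commonly suggested route (a sketch, not a guaranteed fix; it assumes you can still reach an elevated command prompt on the installed system, for example by booting into Safe Mode) is to re-enable the built-in Administrator account:

        rem run from an elevated command prompt on the installed system
        net user Administrator /active:yes

    If only a recovery-media command prompt is reachable, the same command runs against the recovery environment rather than the installed Windows, so on its own it may not be enough there.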

    Read the article

  • Do certain usb ports work better on my IBM T60?

    - by Xavierjazz
    Hi. I am using a Microsoft wireless phone earpiece, and I have had the receiver plugged into a USB port on the left-hand side. I have been getting intermittent success with the signal. I recently tried plugging it into one of the ports on the top right-hand side, and it seems that I am getting a better signal. Is there a difference between the ports? Thanks.

    Read the article

  • How do I make subsonic (media server) work with SSL?

    - by John Baber
    The roughly out-of-the-box setup as a regular user works fine (meaning the site appears at http://myserver.com:4040). From ps aux:

        java -Xmx100m -Dsubsonic.home=/var/subsonic -Dsubsonic.host=0.0.0.0 -Dsubsonic.port=4040 -Dsubsonic.httpsPort=0 -Dsubsonic.contextPath=/ -Dsubsonic.defaultMusicFolder=/var/music -Dsubsonic.defaultPodcastFolder=/var/music/Podcast -Dsubsonic.defaultPlaylistFolder=/var/playlists -Djava.awt.headless=true -verbose:gc -jar subsonic-booter-jar-with-dependencies.jar

    But just giving an https port:

        java -Xmx100m -Dsubsonic.home=/var/subsonic -Dsubsonic.host=0.0.0.0 -Dsubsonic.port=4040 -Dsubsonic.httpsPort=6060 -Dsubsonic.contextPath=/ -Dsubsonic.defaultMusicFolder=/var/music -Dsubsonic.defaultPodcastFolder=/var/music/Podcast -Dsubsonic.defaultPlaylistFolder=/var/playlists -Djava.awt.headless=true -verbose:gc -jar subsonic-booter-jar-with-dependencies.jar

    makes http://myserver.com:4040 say

        HTTP ERROR: 404 NOT_FOUND
        RequestURI=/index.view
        Powered by jetty://

    and https://myserver.com:6060 say "Unable to connect". I'm only making the change by setting

        # SUBSONIC_ARGS="--port=80 --https-port=443 --max-memory=120"
        SUBSONIC_ARGS="--max-memory=100 --https-port=6060"

    in /etc/default/subsonic and issuing sudo service subsonic restart (this is Ubuntu Oneiric).
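
    A quick way to narrow this down (a hedged suggestion; the paths assume a stock Ubuntu package install and may differ on your box) is to confirm what the booter actually bound to after the restart, and to look at Subsonic's own logs for SSL or keystore errors:

        # is anything listening on the plain and https ports after the restart?
        sudo netstat -tlnp | grep -E ':4040|:6060'
        # the subsonic home directory normally contains the startup/booter logs
        ls /var/subsonic/*.log && tail -n 50 /var/subsonic/subsonic_sh.log

    If nothing is listening on 6060, the interesting part is almost always in those logs rather than in the browser.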

    Read the article

  • Wi-Fi Stick with ZD1211 chip refuses to work on Ubuntu >8.10. No clue.

    - by Benjamin Maus
    I have a machine running Ubuntu 9.10 (Karmic, x86_64). Everything is running smoothly so far, except for the Wi-Fi USB stick. The same device worked perfectly in 8.10. The wireless device is a GW-US54GXS using the Zydas ZD1211 chipset. Dmesg output after plugging in:

        [ 196.303436] phy0: Selected rate control algorithm 'minstrel'
        [ 196.304209] zd1211rw 2-1:1.0: phy0
        [ 196.304227] usbcore: registered new interface driver zd1211rw
        [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725
        [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N-
        [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr

    Syslog output:

        Nov 5 11:20:24 somesystem kernel: [ 196.303436] phy0: Selected rate control algorithm 'minstrel'
        Nov 5 11:20:24 kierkegaard NetworkManager: <info> Found radio killswitch rfkill0 (at /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/ieee80211/phy0/rfkill0) (driver <unknown>)
        Nov 5 11:20:24 somesystem kernel: [ 196.304209] zd1211rw 2-1:1.0: phy0
        Nov 5 11:20:24 somesystem kernel: [ 196.304227] usbcore: registered new interface driver zd1211rw
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0)
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wmaster0, iface: wmaster0): no ifupdown configuration found.
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0)
        Nov 5 11:20:24 somesystem NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1/2-1:1.0/net/wlan0, iface: wlan0): no ifupdown configuration found.
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): driver supports SSID scans (scan_capa 0x01).
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): new 802.11 WiFi device (driver: 'zd1211rw')
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): exported as /org/freedesktop/NetworkManager/Devices/2
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): now managed
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): device state change: 1 -> 2 (reason 2)
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): bringing up device.
        Nov 5 11:20:24 somesystem kernel: [ 196.334137] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        Nov 5 11:20:24 somesystem kernel: [ 196.357463] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        Nov 5 11:20:24 somesystem kernel: [ 196.402643] zd1211rw 2-1:1.0: firmware version 4725
        Nov 5 11:20:24 somesystem kernel: [ 196.442611] zd1211rw 2-1:1.0: zd1211b chip 2019:5303 v4810 high 00-90-cc AL2230_RF pa0 ---N-
        Nov 5 11:20:24 somesystem NetworkManager: <WARN> nm_device_hw_bring_up(): (wlan0): device not up after timeout!
        Nov 5 11:20:24 somesystem NetworkManager: <info> (wlan0): deactivating device (reason: 2).
        Nov 5 11:20:24 somesystem kernel: [ 196.463814] usb 2-1: firmware: requesting zd1211/zd1211b_ub
        Nov 5 11:20:24 somesystem kernel: [ 196.466823] usb 2-1: firmware: requesting zd1211/zd1211b_uphr
        Nov 5 11:20:29 somesystem wpa_supplicant[978]: Could not set interface 'wlan0' UP
        Nov 5 11:20:29 somesystem wpa_supplicant[978]: Failed to initialize driver interface
        Nov 5 11:20:29 somesystem NetworkManager: <WARN> nm_supplicant_interface_add_cb(): Unexpected supplicant error getting interface: wpa_supplicant couldn't grab this interface.

    Gnome tells me in the network menu that the device is "not ready". It appears in iwconfig but not in ifconfig. The same symptoms appear when I boot from the live CD. How can I solve this dilemma?
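
    The "Found radio killswitch" and "device not up after timeout" lines make it worth checking whether the radio is soft-blocked and whether reloading the driver changes anything. A hedged sketch (the rfkill utility may need to be installed from the repositories):

        # show the killswitch state for the phy0 radio
        rfkill list
        # clear any soft block, then reload the zd1211rw module and watch the kernel log
        sudo rfkill unblock all
        sudo modprobe -r zd1211rw && sudo modprobe zd1211rw
        dmesg | tail -n 20

    If the interface still refuses to come up with rfkill clear, the tail of dmesg after the reload is usually the most useful thing to attach to a bug report against the zd1211rw driver.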

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need a program (Windows 7 32-bit) to help me with this process: I keep my documents, music, video clips, movies and pictures on my hard disk. They are not scattered around the system; everything lives inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk: files deleted from C:\Senthil\ should be deleted there, new files should be copied, and so on. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk. A couple of important requirements and points:

    - I do NOT need multiple or historic versions of my files. I only want the latest copy to be present in my "backup".
    - Incremental backup makes sense: if files were not touched since the last backup, they need not be copied again.
    - The size of my folder will run into GBs and in a year or two will go into TBs, but I will make sure the external HDD is equal to or bigger than my source folder.
    - I do not want it to run automatically, because when I accidentally delete a file in my source it would delete the one in the backup (I know this is why we have versioning facilities). I just want to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in that situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D The main reason I am asking is that I am a software developer and could write this myself, but I am a little constrained by time at the moment and want to know if there is an existing program that can do this. Kindly don't worry about earthquakes, fires or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because:

    - I will have bigger things to worry about than my holiday memories.
    - I don't think I will digitally store any life-ruining documents. This backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world.
    - I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete or teens who hit Shift + Delete, or myself getting a little careless at times!

    In short: is there a file/folder syncing program that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
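
    For what it's worth, a one-way mirror like this can be sketched with robocopy, which ships with Windows 7 (the drive letter E: and the destination folder name are assumptions):

        rem copy new/changed files from the source to the external disk and
        rem delete anything in the destination that no longer exists in the source
        robocopy C:\Senthil E:\Senthil /MIR /R:2 /W:5

    Run manually, /MIR gives exactly the "make the destination look like the source" behaviour described above; dedicated sync tools mostly add scheduling and versioning on top of the same idea.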

    Read the article

  • Using Cygwin in Windows 8, chmod 600 does not work as expected?

    - by Castaa
    I'm trying to change the permissions on my key file key.pem in Cygwin 1.7.11. It currently has the permission flags -rw-rw----. Running

        chmod -c 600 key.pem

    reports:

        mode of 'key.pem' changed from 0660 (rw-rw----) to 0600 (rw-------)

    However, ls -l key.pem still shows key.pem's permission flags as -rw-rw----. The reason I'm asking is that ssh complains

        Permissions 0660 for 'key.pem' are too open.

    when I try to ssh into my Amazon EC2 instance. Is this an issue with Cygwin and Windows 8 NTFS, or am I missing something?
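
    One thing worth ruling out (an assumption to check, not a confirmed diagnosis): if the path lives on a mount that Cygwin handles with the noacl option, or under a directory whose inherited Windows ACLs override what chmod sets, the POSIX mode reported by ls won't actually change. Comparing Cygwin's view of the file with the underlying NTFS ACL usually shows which case it is:

        # how is the containing drive mounted? look for acl/noacl in the options
        mount
        # Cygwin's view of the ACL versus Windows' view of it
        getfacl key.pem
        icacls key.pem

    A frequently suggested workaround is to move the key under the Cygwin home directory (on the default acl-enabled mount) and chmod it there.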

    Read the article

  • Will these instructions work when turning off journaling on an ext4 SSD?

    - by snowlord
    I have an Acer Aspire One with an SSD for storage. I recently installed Ubuntu on it and chose ext4 for my filesystem. Then I read that journaling on an SSD isn't the best idea, so I will try to disable journaling, and I have found these instructions (from http://fenidik.blogspot.com/2010/03/ext4-disable-journal.html):

        # Create ext4 fs on /dev/sda10 disk
        mkfs.ext4 /dev/sda10
        # Enable writeback mode. This mode will typically provide the best ext4 performance.
        tune2fs -o journal_data_writeback /dev/sda10
        # Delete has_journal option
        tune2fs -O ^has_journal /dev/sda10
        # Required fsck
        e2fsck -f /dev/sda10
        # Check fs options
        dumpe2fs /dev/sda10 | more

    For more performance, add the fstab options data=writeback,noatime,nodiratime, i.e.:

        /dev/sda10 /opt ext4 defaults,data=writeback,noatime,nodiratime 0 0

    I will use them on my boot partition. Are there any particularly bad parts here, or are there any missing steps? Will my boot partition be fit for being on an SSD after this? Or should I consider switching to ext2, or even reinstall it all and choose ext2 at partitioning time (I'd rather not, though, since I've configured quite some stuff already)?
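
    One practical point those instructions gloss over: tune2fs will generally refuse to remove the journal (and e2fsck should not be run) while the filesystem is mounted read-write, so for a root/boot partition the usual approach is to do it from a live CD/USB session. A rough sketch, assuming the installed root is /dev/sda1 (the device name is a guess; check with lsblk or fdisk -l first):

        # from the live session, with the target partition NOT mounted
        sudo tune2fs -O ^has_journal /dev/sda1
        sudo e2fsck -f /dev/sda1
        # confirm: has_journal should no longer appear in the feature list
        sudo dumpe2fs -h /dev/sda1 | grep 'Filesystem features'

    The mkfs.ext4 line in the quoted instructions is only there for creating a fresh filesystem and would wipe the existing install, so it should be skipped when converting a partition in place.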

    Read the article

  • How can I make this if construction work in Bash?

    - by Dragos
    In Bash, how can I make a construction like this work:

        if (cp /folder/path /to/path) && (cp /anotherfolder/path /to/anotherpath)
        then echo "Succeeded"
        else echo "Failed"
        fi

    The if should test the return code ($?) of each command and tie them together with &&. How can I do this in Bash?
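
    For reference, a minimal working form (the paths are the hypothetical ones from the question). The subshell parentheses aren't needed: if tests the exit status of the command list directly, and && already short-circuits if the first cp fails:

        if cp /folder/path /to/path && cp /anotherfolder/path /to/anotherpath; then
            echo "Succeeded"
        else
            echo "Failed"
        fi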

    Read the article

  • Does gunzip work in memory or does it write to disk?

    - by Ryan Detzel
    We have our log files gzipped to save space. Normally we keep them compressed and just do

        gunzip -c file.gz | grep 'test'

    to find important information, but we're wondering whether it's quicker to keep the files uncompressed and then do the grep:

        cat file | grep 'test'

    There has been some discussion about how gzip works: if it reads the file into memory and unzips it there, the first one would be faster, but if it doesn't, the second one would be. Does anyone know how gzip uncompresses data?
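
    For what it's worth, gunzip -c (and its synonym zcat) decompresses as a stream to stdout and does not write a temporary file to disk, so the real trade-off is CPU cost of decompression versus the extra disk reads for the larger uncompressed file. The easiest way to settle it for these particular logs is to time both pipelines (file names are the ones from the question):

        # compressed: stream-decompress and grep
        time gunzip -c file.gz | grep 'test' > /dev/null
        # uncompressed: grep the file directly (no cat needed)
        time grep 'test' file > /dev/null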

    Read the article

  • How to define and work with an array of bits in C?

    - by Eddy
    I want to create a very large array in which I write '0's and '1's. I'm trying to simulate a physical process called random sequential adsorption, where units of length 2 (dimers) are deposited onto an n-dimensional lattice at random locations, without overlapping each other. The process stops when there is no more room left on the lattice for depositing more dimers (the lattice is jammed). Initially I start with a lattice of zeroes, and the dimers are represented by a pair of '1's. As each dimer is deposited, the site on the left of the dimer is blocked, due to the fact that the dimers cannot overlap, so I simulate this process by depositing a triple of '1's on the lattice. I need to repeat the entire simulation a large number of times and then work out the average coverage %. I've already done this using an array of chars for 1D and 2D lattices. At the moment I'm trying to make the code as efficient as possible, before working on the 3D problem and more complicated generalisations. This is basically what the code looks like in 1D, simplified:

        int main()
        {
            /* Define lattice */
            array = (char*)malloc(N * sizeof(char));

            total_c = 0;

            /* Carry out RSA multiple times */
            for (i = 0; i < 1000; i++)
                rand_seq_ads();

            /* Calculate average coverage efficiency at jamming */
            printf("coverage efficiency = %lf", total_c/1000);

            return 0;
        }

        void rand_seq_ads()
        {
            /* Initialise array, initial conditions */
            memset(array, 0, N * sizeof(char));
            available_sites = N;
            count = 0;

            /* While the lattice still has enough room... */
            while (available_sites != 0)
            {
                /* Generate random site location */
                x = rand();

                /* Deposit dimer (if site is available) */
                if (array[x] == 0)
                {
                    array[x] = 1;
                    array[x+1] = 1;
                    count += 1;
                    available_sites += -2;
                }

                /* Mark site left of dimer as unavailable (if it's empty) */
                if (array[x-1] == 0)
                {
                    array[x-1] = 1;
                    available_sites += -1;
                }
            }

            /* Calculate coverage %, and add to total */
            c = (double) count / N;
            total_c += c;
        }

    For the actual project I'm doing, it involves not just dimers but trimers, quadrimers, and all sorts of shapes and sizes (for 2D and 3D). I was hoping to work with individual bits instead of bytes, but from what I've read you can only address one byte at a time, so either I need to do some complicated indexing or there is a simpler way to do it? Thanks for your answers.

    Read the article

  • How do I connect over SSH without being asked for the password every time? I already followed some answers here but it doesn't work

    - by MEM
    Mac OS X Lion 10.7.3.

    1) On the host, I've created an authorized_keys file inside the .ssh folder, by doing:

        touch authorized_keys

    2) I've copied my public ssh key into the host's .ssh folder by doing:

        scp ~/.ssh/mykey.pub [email protected]:/home/userhost/.ssh/mykey.pub

    3) I've placed its contents inside the authorized_keys file by doing:

        cat mykey.pub >> authorized_keys

    4) Then I've removed the mykey.pub file:

        rm mykey.pub

    5) On my terminal, locally, inside my ~/.ssh folder I ran:

        ssh-add mykey

    (notice that it is without the .pub extension);

    6) I've closed and opened the terminal again. When I first connected to this host, it was added to the known_hosts file inside ~/.ssh; I've opened known_hosts in pico and the hash is there. Still, every time I connect by doing

        ssh [email protected]

    it requests a password! What am I missing here?

    UPDATE: I've done EVEN TWO MORE THINGS here:

    7) Set your key to be the default identity - if it doesn't exist, create it:

        touch ~/.ssh/config

    and place the following line inside: IdentityFile ~/.ssh/yourkeyname (id_rsa is normally your default key; you should switch it to your key. This tells outgoing ssh connections to use this as the default identity.)

    8) Add a bash process to your ssh-agent:

        ssh-agent bash
        ssh-add ~/.ssh/yourkeyname

    Lisinge's answer helped but it's not definitive: if we restart the machine, the password gets prompted again!!! How can we debug this? What can we do here? How can we check where this process is failing?

    UPDATE 2: If I use

        ssh -v -i <keyfile> [email protected]

    I get, among other things:

        OpenSSH_5.6p1, OpenSSL 0.9.8r 8 Feb 2011
        Warning: Identity file yourkeyname not accessible: No such file or directory.

    What does this message refer to? Is the identity file not accessible on the localhost, or not accessible on the remote host? Please advise.
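
    Two things the steps above don't cover, offered as things to check rather than a definitive answer: sshd silently ignores authorized_keys when the remote home directory, ~/.ssh, or the file itself is group- or world-writable, and the "Identity file yourkeyname not accessible" warning is printed by the local ssh client when the path given to -i doesn't exist locally (so it refers to the local machine, not the remote one). A rough checklist (user@remotehost is a placeholder):

        # on the remote host: tighten the permissions that sshd insists on
        chmod 700 ~/.ssh
        chmod 600 ~/.ssh/authorized_keys
        # on the local machine: point -i at the actual private key file and read the verbose output
        ssh -v -i ~/.ssh/mykey user@remotehost

    In the -v output, the lines around "Offering public key" and "Authentications that can continue" usually say exactly why the key was rejected.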

    Read the article

  • What might cause the Save and Open dialog windows to just not work?

    - by Nick Bolton
    In several applications, the Save and Open dialog windows simply do not appear. Notepad (which obviously handles the return code) reports an out-of-memory issue, which I'm sure is not the case; I think it assumes it's out of memory because it can't get the window handle. In any case, there's something definitely wrong with Windows, but there's nothing in the event log. Any idea why this might happen?
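
    The common Open/Save dialogs are provided by shared Windows components rather than by each application, so one low-risk first step (a suggestion, not a confirmed fix) is to let Windows verify its own system files and see whether anything is reported as corrupt:

        rem run from an elevated command prompt
        sfc /scannow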

    Read the article

  • I have an NGINX server configured to work with node.js, but a 1.03 MB js file often fails to load in various browsers on various PCs

    - by Totty
    I'm using this on a local LAN so it should be quite fast. The nginx server uses the node.js server to serve static files, so requests must pass through node.js to download the files, but that is not a problem when I'm not using nginx. In Chrome with the debugger on I can see that the status is 206 Partial Content and it has only downloaded 31KB of 1.03MB. After 1.1 min it turns red and the status is failed.

        Waiting time: 6ms
        Receiving: 1.1 min

    The headers in Google Chrome:

        Request URL:http://192.168.1.16/production/assembly/script/production.js
        Request Method:GET
        Status Code:206 Partial Content

        Request Headers
        Accept:*/*
        Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
        Accept-Encoding:gzip,deflate,sdch
        Accept-Language:pt-PT,pt;q=0.8,en-US;q=0.6,en;q=0.4
        Connection:keep-alive
        Cookie:connect.sid=s%3Abls2qobcCaJ%2FyBNZwedtDR9N.0vD4Fi03H1bEdCszGsxIjjK0lZIjJhLnToWKFVxZOiE
        Host:192.168.1.16
        If-Range:"1081715-1350053827000"
        Range:bytes=16090-16090
        Referer:http://192.168.1.16/production/assembly/
        User-Agent:Mozilla/5.0 (Windows NT 6.0) AppleWebKit/537.4 (KHTML, like Gecko) Chrome/22.0.1229.94 Safari/537.4

        Response Headers
        Accept-Ranges:bytes
        Cache-Control:public, max-age=0
        Connection:keep-alive
        Content-Length:1
        Content-Range:bytes 16090-16090/1081715
        Content-Type:application/javascript
        Date:Mon, 15 Oct 2012 09:18:50 GMT
        ETag:"1081715-1350053827000"
        Last-Modified:Fri, 12 Oct 2012 14:57:07 GMT
        Server:nginx/1.1.19
        X-Powered-By:Express

    My nginx configurations. File 1:

        user totty;
        worker_processes 4;
        pid /var/run/nginx.pid;

        events {
            worker_connections 768;
            # multi_accept on;
        }

        http {
            ##
            # Basic Settings
            ##

            sendfile on;
            tcp_nopush on;
            tcp_nodelay on;
            keepalive_timeout 65;
            types_hash_max_size 2048;
            # server_tokens off;

            # server_names_hash_bucket_size 64;
            # server_name_in_redirect off;

            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            ##
            # Logging Settings
            ##

            access_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_access.txt;
            error_log /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt;

            ##
            # Gzip Settings
            ##

            gzip on;
            gzip_disable "msie6";

            # gzip_vary on;
            # gzip_proxied any;
            # gzip_comp_level 6;
            # gzip_buffers 16 8k;
            # gzip_http_version 1.1;
            # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

            ##
            # nginx-naxsi config
            ##
            # Uncomment it if you installed nginx-naxsi
            ##

            #include /etc/nginx/naxsi_core.rules;

            ##
            # nginx-passenger config
            ##
            # Uncomment it if you installed nginx-passenger
            ##

            #passenger_root /usr;
            #passenger_ruby /usr/bin/ruby;

            ##
            # Virtual Host Configs
            ##

            autoindex on;

            include /home/totty/web/production01_server/_deployment/nginxConfigs/server/*;
        }

    File that is included by the previous file:

        server {
            # custom location for entry
            # using only "/" instead of "/production/assembly" it
            # would allow you to go to "thatip/". In this way
            # we are limiting to "thatip/production/assembly/"
            location /production/assembly/ {
                # ip and port used in node.js
                proxy_pass http://127.0.0.1:3000/;
            }

            location /production/assembly.mongo/ {
                proxy_pass http://127.0.0.1:9000/;
                proxy_redirect off;
            }

            location /production/assembly.logs/ {
                autoindex on;
                alias /home/totty/web/production01_server/node_modules/production/_logs/;
            }
        }
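
    Not a confirmed diagnosis, just a pattern that matches these symptoms: when nginx buffers a large proxied response to its temp directory and can't write there (or when proxy buffering interacts badly with the upstream's Range/206 replies), large files get truncated while small ones work fine. Two quick things to check on the nginx box (the temp path is an assumption about a stock Ubuntu layout):

        # watch the configured error log while reloading the page; look for
        # "failed to write to temporary file" or permission errors
        tail -f /home/totty/web/production01_server/node_modules/production/_logs/_NGINX_error.txt
        # check who owns nginx's proxy temp dir (this config runs workers as user "totty")
        ls -ld /var/lib/nginx/proxy

    If the temp directory turns out to be the problem, fixing its ownership or adding proxy_buffering off; inside the /production/assembly/ location are the usual ways out.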

    Read the article

  • How does pptpd (poptop) or pppd work with eap-tls and mppe-128?

    - by Henk
    To create a VPN I've installed pptpd on an Ubuntu domU (Debian domUs can also be created). MSCHAPv2 isn't a very strong authentication protocol so I'd like to use EAP-TLS. I've set up a FreeRADIUS server and certificates for EAP-TLS before (for use with WPA), and I've also set up a pptp server with mschap-v2 auth, but I can't figure out how to combine the two. Maybe pppd can use EAP-TLS on its own, but I can't find support for it in the Ubuntu package. If I need to patch the package, that's fine, I know how to patch Debian packages (provided the patch applies cleanly). Also, can MPPE still be used when pppd is configured to use EAP? Because it says in the manual several times that MPPE requires MSCHAP. However, other docs like this one: http://www.nikhef.nl/~janjust/ppp/ seem to refute that. The clients are running Mac OS X Leopard and GNU/Linux, there's no need to fix anything for Windows.

    Read the article

  • How does Firefox sync really work (when adding new devices)?

    - by tim11g
    I'm adding some less frequently used computers to my Firefox Sync account. These computers were previously synced using Foxmarks BYOS. When I started using Firefox Sync, I deleted some old bookmarks. Later, as I added some other machines, old bookmarks (which still existed on the other machines) were synced back to my main machine. To prevent that from happening, I wonder if I need to delete all the bookmarks from new machines before adding them to the Sync account. But then I worry that it might sync the deletion of all the bookmarks and delete them all from the server and my other machines. Is there any documentation on the exact syncing behavior when adding new devices? Is there any way to monitor progress and sync status? Is there any way to force a "one way" sync for the first connection (server to browser only, overwriting everything in the browser)? Is there any way to see a list of devices that are associated, and the last time they synced? Thanks!

    Read the article
