Search Results

Search found 6774 results on 271 pages for 'special locations'.


  • Windows 7 - Add folder to Explorer Favorites navigation pane from the Command Line

    - by nondescript1
    In Windows 7, is there a way to add a location to the Explorer Favorites navigation pane from the command line? I'm working with systems that are frequently re-imaged, and I would like to automate adding a number of favorite folders to Explorer. I imagine these favorites are also stored in the registry. If someone knows where, I could probably automate managing them through the reg command, although this is less than ideal. I've looked at a number of locations related to Explorer suggested here, but haven't found them yet. For information on customizing the Favorites section of the navigation pane within Explorer, see http://www.howtogeek.com/howto/10357/add-your-own-folders-to-favorites-in-windows-7/

  • Windows 7 Wireless Network Adapter Stopped Working

    - by Andrew B Schultz
    I have a Windows 7 Ultimate machine where the wireless adapter all of a sudden started having trouble connecting to wireless networks. Whenever I go to a new place and try to connect to a wireless network, it says that the DNS server is not responding, and tells me to go unplug the router and try again. After several locations in a row telling me this, I began to realize something was wrong with my adapter, not the routers. I am no longer asked to identify the security level for any new networks (Work, Home, or Public) like I used to be (it defaults to Public now - with the park bench icon). Often, resetting the router doesn't even work. Running the Windows 7 troubleshooter doesn't give me anything better than the advice to reset the router. However, the adapter will still connect to the wireless network at my main office without any problems. Does anyone know why a wireless network adapter can get so finicky so suddenly? Thanks!

  • How to perform fresh linux install while preserving software raid and user accounts

    - by slayton
    I have a system with two software RAID arrays. The OS is Ubuntu 9.04 and is no longer receiving updates. I'd like to update the system to 12.04 rather than trying to do the automatic upgrade from 9.04 -> 9.10 -> ... -> 12.04. My main drive has 2 partitions that are mounted at / and /home. Is it possible to do a fresh install of Linux to the partition where / is mounted while preserving user accounts and preferences (such as passwords, home dir locations, etc.)? Additionally, what do I need to do to keep my software RAID arrays intact following the OS re-install?
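
    A minimal sketch of the usual approach, assuming the arrays are managed by mdadm and that /home really is its own partition (file locations and installer wording below are illustrative):

    # Before reinstalling, keep a copy of the array layout and the old fstab for reference
    sudo mdadm --detail --scan > ~/mdadm-arrays.txt
    cp /etc/fstab ~/fstab-backup

    # During the 12.04 install, choose manual ("Something else") partitioning:
    #   - format and mount the new root partition at /
    #   - select the existing /home partition, set its mount point to /home, and do NOT format it
    #   - leave the RAID member partitions untouched

    # After the install, existing arrays are normally detected automatically; if not:
    sudo mdadm --assemble --scan
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
    sudo update-initramfs -u

    Note that passwords and UIDs live in /etc/passwd and /etc/shadow, which a reinstall replaces, so the accounts themselves still have to be recreated (ideally with the same UIDs) even though the home directories survive.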

  • Can I change the system's "Browse for Folder" dialog globally?

    - by Chris Phillips
    As far as I know, everyone hates the "Browse for Folder" dialog: This dialog is always too small, rarely remembers locations well, and worst of all: forces you to navigate your entire computer using a tedious tree structure. Now, to be fair, some of the problems are likely to do with how apps are invoking the control -- not setting a size or a default directory, etc. But the problem about the tedious tree control remains. Is there any way to customize your Windows installation to use a different control? Preferably an app/installer that does it for you safely, but dropping in a compatible DLL or similar technique would be okay too. Or are we stuck with this terrible control forever?

  • use network drives as mount points during installation?

    - by ajsie
    Is it possible to use network storage locations as mount points during installation? I want to separate the system (Ubuntu) from the data (personal files). E.g. if I have 5 computers, I don't want to recreate /home/david 5 times, so I want to mount networkdrive/home at /home on the local Ubuntu server, so ALL users' home folders could be used, and maybe also networkdrive/projects at /projects. That way it's OK if I accidentally repartition the local Ubuntu server, because the data is not on that server but on the data server. Is separating "data" from "logic" good in this case? And is it possible? What protocol should I use for the mapping over the internet? (Maybe the server is in Sweden and the data is in Norway.) Thanks.
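
    On Linux the usual tool for this is NFS, typically configured after installation rather than inside the installer; a minimal sketch, with host names and export paths made up for illustration:

    # On the data server: export the home directories
    # /etc/exports:
    #   /srv/home   10.0.0.0/24(rw,sync,no_subtree_check)
    sudo exportfs -ra

    # On each Ubuntu machine: mount it at /home via /etc/fstab
    #   dataserver.example.com:/srv/home   /home   nfs   defaults,_netdev   0   0
    sudo mount -a

    Plain NFS is not encrypted, so between Sweden and Norway it should run over a VPN (or be replaced by something like sshfs) rather than across the open internet.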

  • Determine server specs for a Rails app with a MySQL database (on AWS)

    - by Rogier
    I developed an intranet application with Rails (3.2) for one of my customers. There will be around 30-40 employees working with it. The backend is MySQL (5). What would be the best way to determine the server specs needed? Given:
    - max. load will be roughly 2400 (40*60) HTTP requests (mixed GET/POST) per hour; 15% of these calls are JSON calls (iOS)
    - the average request will make between 5-10 database calls
    - 500-800 SQL INSERTs per day
    - webpages are fairly simple (no images, just text); the average webpage is 15 requests (css/js/etc) and the total size is 35-45 KB
    More specifically, since they need access from multiple geographical locations, we are thinking of running a Bitnami Ruby stack in the AWS cloud (uptime is important). Any thoughts on an AWS instance (small/medium) and utilization (light/medium/heavy)? Thanks!
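
    Rough back-of-envelope arithmetic from the numbers above (just an illustration, not a sizing guarantee):

    echo "scale=2; 2400 / 3600" | bc         # ~0.67 HTTP requests per second on average
    echo "scale=2; 2400 * 10 / 3600" | bc    # ~6.7 database calls per second at the upper bound
    echo "scale=2; 800 / 86400" | bc         # well under 1 SQL INSERT per second

    Averages this low usually matter less than peak concurrency during the working day and the memory footprint of the Rails workers, so it is worth load-testing a small instance before paying for a bigger one.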

  • Simulating mouse clicks at specific screen coordinates

    - by Matteo Riva
    I would like to be able to bind some keys to mouse clicks at specific locations. For example: when I press F1 I should get a left mouse click at coordinates 300x350, F2 at 600x350, and so on. Even better if this could be bound to a specific application window, so that coordinates could be relative to it instead of the base desktop. Is there software which allows this?

    ADDITION: OK, AutoHotkey is great, but I have problems with my particular setup. Quoting my comment below: I'm using it with an old game (Championship Manager 01/02) which runs in windowed mode (and I have to set Win98 compatibility for it to run). I can get the mouse to move, but no click goes to the application. I have read this FAQ but it didn't help; this is the script I tried:

    SendMode Play
    SetKeyDelay, 0, 50, Play
    F1::Click 42, 191
    F2::ControlSend ahk_class main, Click, Championship Manager 01/02

    Still no luck: the pointer moves but no click goes through.

  • Using Google's App Engine as CDN for static files

    - by Saif Bechan
    I am planning on moving my static files to Google's App Engine. I was wondering if this is a good idea. I have read that it is possible that Google will cache your files in multiple locations, which is a good thing in my opinion. The setup should also be quite easy in Eclipse with the GAE plugins. But I still have my doubts about the performance of this. Is the App Engine setup optimized for serving static content? Right now Nginx serves my static content; will App Engine perform the same way? Are there any other ups or downs to using this method?

  • Methodologies for performance-testing a WAN link

    - by Chopper3
    We have a pair of new diversely-routed 1Gbps Ethernet links between locations about 200 miles apart. The 'client' is a new reasonably-powerful machine (HP DL380 G6, dual E56xx Xeons, 48GB DDR3, R1 pair of 300GB 10krpm SAS disks, W2K8R2-x64) and the 'server' is a decent enough machine too (HP BL460c G6, dual E55xx Xeons, 72GB, R1 pair of 146GB 10krpm SAS disks, dual-port Emulex 4Gbps FC HBA linked to dual Cisco MDS9509s then onto dedicated HP EVA 8400 with 128 x 450GB 15krpm FC disks, RHEL 5.3-x64). Using SFTP from the client we're only seeing about 40Kbps of throughput using large (2GB) files. We've performed server to 'other local server' tests and see around 500Mbps through the local switches (Cat 6509s), we're going to do the same on the client side but that's a day or so away. What other testing methods would you use to prove to the link providers that the problem is theirs?
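
    One common way to take SFTP (and its encryption and windowing overhead) out of the picture is a raw TCP throughput test with iperf between the two sites; a minimal sketch (iperf version 2 syntax; the port, duration and window size are arbitrary):

    # On the 'server' end
    iperf -s -p 5001

    # On the 'client' end: single-stream test, then repeat with parallel streams and a larger window
    iperf -c server.example.com -p 5001 -t 60
    iperf -c server.example.com -p 5001 -t 60 -P 8 -w 4M

    If one stream is slow but eight parallel streams fill the link, the limit is likely the TCP window against the round-trip time of a 200-mile path rather than the circuit itself; SFTP is additionally throttled by its own internal window, so cross-checking with plain HTTP/FTP copies is also worthwhile.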

  • how can I pass an environment variable through an ssh command?

    - by Ross Rogers
    How can I pass a value into an ssh command, such that the environment that is started on the host machine starts with a certain environment variable set to my choosing? EDIT: The goal is to pass the current KDE desktop (from dcop kwin KWinInterface currentDesktop) to the new shell created, so that I can pass back an NFS location to my JEdit instance on the original server, which is unique for each KDE desktop (using a mechanism like emacsserver/emacsclient). The reason multiple ssh instances can be in flight at one time is because when I'm setting up my environment, I'm opening a bunch of different ssh instances to different machines.
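
    Two common approaches, sketched with a made-up variable name (KDE_DESKTOP):

    # 1) Set the variable as part of the remote command, then hand over to a login shell
    desk=$(dcop kwin KWinInterface currentDesktop)
    ssh -t user@host "export KDE_DESKTOP=$desk; exec bash -l"

    # 2) Let ssh forward the variable itself (needs server-side cooperation):
    #    client ~/.ssh/config:         SendEnv KDE_DESKTOP
    #    server /etc/ssh/sshd_config:  AcceptEnv KDE_DESKTOP   (then restart sshd)
    KDE_DESKTOP=$desk ssh user@host

    The second form leaves the remote command line untouched, but only works once the server is configured to accept that particular variable.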

  • curl can't verify cert using capath, but can with cacert option

    - by phylae
    I am trying to use curl to connect to a site using HTTPS, but curl is failing to verify the SSL cert.

    $ curl --verbose --capath ./certs/ --head https://example.com/
    * About to connect() to example.com port 443 (#0)
    *   Trying 1.1.1.1... connected
    * Connected to example.com (1.1.1.1) port 443 (#0)
    * successfully set certificate verify locations:
    *   CAfile: none
      CApath: ./certs/
    * SSLv3, TLS handshake, Client hello (1):
    * SSLv3, TLS handshake, Server hello (2):
    * SSLv3, TLS handshake, CERT (11):
    * SSLv3, TLS alert, Server hello (2):
    * SSL certificate problem, verify that the CA cert is OK. Details:
    error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
    * Closing connection #0
    curl: (60) SSL certificate problem, verify that the CA cert is OK. Details:
    error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
    More details here: http://curl.haxx.se/docs/sslcerts.html

    curl performs SSL certificate verification by default, using a "bundle" of Certificate Authority (CA) public keys (CA certs). If the default bundle file isn't adequate, you can specify an alternate file using the --cacert option. If this HTTPS server uses a certificate signed by a CA represented in the bundle, the certificate verification probably failed due to a problem with the certificate (it might be expired, or the name might not match the domain name in the URL). If you'd like to turn off curl's verification of the certificate, use the -k (or --insecure) option.

    I know about the -k option, but I do actually want to verify the cert. The certs directory has been properly hashed with c_rehash . and it contains:

    - A Verisign intermediate cert
    - Two self-signed certs

    The above site should be verified with the Verisign intermediate cert. When I use the --cacert option instead (and point directly to the Verisign cert), curl is able to verify the SSL cert.

    $ curl --verbose --cacert ./certs/verisign-intermediate-ca.crt --head https://example.com/
    * About to connect() to example.com port 443 (#0)
    *   Trying 1.1.1.1... connected
    * Connected to example.com (1.1.1.1) port 443 (#0)
    * successfully set certificate verify locations:
    *   CAfile: ./certs/verisign-intermediate-ca.crt
      CApath: /etc/ssl/certs
    * SSLv3, TLS handshake, Client hello (1):
    * SSLv3, TLS handshake, Server hello (2):
    * SSLv3, TLS handshake, CERT (11):
    * SSLv3, TLS handshake, Server finished (14):
    * SSLv3, TLS handshake, Client key exchange (16):
    * SSLv3, TLS change cipher, Client hello (1):
    * SSLv3, TLS handshake, Finished (20):
    * SSLv3, TLS change cipher, Client hello (1):
    * SSLv3, TLS handshake, Finished (20):
    * SSL connection using RC4-SHA
    * Server certificate:
    *   subject: C=US; ST=State; L=City; O=Company; OU=ou1; CN=example.com
    *   start date: 2011-04-17 00:00:00 GMT
    *   expire date: 2012-04-15 23:59:59 GMT
    *   common name: example.com (matched)
    *   issuer: C=US; O=VeriSign, Inc.; OU=VeriSign Trust Network; OU=Terms of use at https://www.verisign.com/rpa (c)10; CN=VeriSign Class 3 Secure Server CA - G3
    * SSL certificate verify ok.
    > HEAD / HTTP/1.1
    > User-Agent: curl/7.19.7 (x86_64-pc-linux-gnu) libcurl/7.19.7 OpenSSL/0.9.8k zlib/1.2.3.3 libidn/1.15
    > Host: example.com
    > Accept: */*
    >
    < HTTP/1.1 404 Not Found
    HTTP/1.1 404 Not Found
    < Cache-Control: must-revalidate,no-cache,no-store
    Cache-Control: must-revalidate,no-cache,no-store
    < Content-Type: text/html;charset=ISO-8859-1
    Content-Type: text/html;charset=ISO-8859-1
    < Content-Length: 1267
    Content-Length: 1267
    < Server: Jetty(7.2.2.v20101205)
    Server: Jetty(7.2.2.v20101205)
    <
    * Connection #0 to host example.com left intact
    * Closing connection #0
    * SSLv3, TLS alert, Client hello (1):

    In addition, if I try hitting one of the sites using a self-signed cert and the --capath option, it also works. (Let me know if I should post an example of that.) This implies that curl is finding the cert directory, and it is properly hashed. Finally, I am able to verify the SSL cert with openssl, using its -CApath option.

    $ openssl s_client -CApath ./certs/ -connect example.com:443
    CONNECTED(00000003)
    depth=3 /C=US/O=VeriSign, Inc./OU=Class 3 Public Primary Certification Authority
    verify return:1
    depth=2 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
    verify return:1
    depth=1 /C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
    verify return:1
    depth=0 /C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
    verify return:1
    ---
    Certificate chain
     0 s:/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
       i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
    ---
    Server certificate
    -----BEGIN CERTIFICATE-----
    <cert removed>
    -----END CERTIFICATE-----
    subject=/C=US/ST=State/L=City/O=Company/OU=ou1/CN=example.com
    issuer=/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=Terms of use at https://www.verisign.com/rpa (c)10/CN=VeriSign Class 3 Secure Server CA - G3
    ---
    No client certificate CA names sent
    ---
    SSL handshake has read 1563 bytes and written 435 bytes
    ---
    New, TLSv1/SSLv3, Cipher is RC4-SHA
    Server public key is 2048 bit
    Secure Renegotiation IS NOT supported
    Compression: NONE
    Expansion: NONE
    SSL-Session:
        Protocol  : TLSv1
        Cipher    : RC4-SHA
        Session-ID: D65C4C6D52E183BF1E7543DA6D6A74EDD7D6E98EB7BD4D48450885188B127717
        Session-ID-ctx:
        Master-Key: 253D4A3477FDED5FD1353D16C1F65CFCBFD78276B6DA1A078F19A51E9F79F7DAB4C7C98E5B8F308FC89C777519C887E2
        Key-Arg   : None
        Start Time: 1303258052
        Timeout   : 300 (sec)
        Verify return code: 0 (ok)
    ---
    QUIT
    DONE

    How can I get curl to verify this cert using the --capath option?
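
    One thing worth ruling out (a guess, not a confirmed diagnosis): the User-Agent line shows this curl is linked against OpenSSL 0.9.8k, which looks certificates up in a CApath by the old-style subject hash, while the c_rehash shipped with newer OpenSSL releases creates new-style hash links. A quick way to compare what exists in the directory against what the library would look for:

    # Which hash links did c_rehash actually create?
    ls -l certs/

    # Hash of the intermediate as computed by the local openssl CLI
    openssl x509 -in certs/verisign-intermediate-ca.crt -noout -hash

    # On OpenSSL 1.0 or later, the old 0.9.8-style hash is also available
    openssl x509 -in certs/verisign-intermediate-ca.crt -noout -subject_hash_old

    # If the hash curl's OpenSSL expects has no matching link, add it by hand:
    # ln -s verisign-intermediate-ca.crt certs/<expected-hash>.0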

  • Using 64 bit wuauclt from 32 bit command prompt

    - by Tim Brigham
    I have a script that for legacy reasons needs to run inside a 32-bit command shell. This script also includes references to certain core Windows binaries - most notably wuauclt, but others as well - which are not accessible by default within the 32-bit environment. This script is being run in several locations, including many Windows 7 and Server 2008 R2 boxes. I'm aware of the possibility of copying files from System32 to SysWOW64 in order to get around this. Is there any better method - something along the lines of adding an entry to the path variable - which will allow me to fall back to these 64-bit binaries from within a 32-bit script?

  • Need advice on data storage hardware setup for a client with an 80TB-per-year data footprint increase

    - by dasko
    Hi everyone, I currently have a client that will be adding replicated data from satellite locations at a rate of approximately 80TB per year. With this said, in year 2 we will have 160TB, and so on year after year. I want to do some sort of RAID 10 or RAID 6 setup. I want to keep the servers to approximately 4U high and rack mounted. All suggestions are welcome on a replication strategy. We will want to have one instance of the data in house and the other co-located (any suggestions on co-location sites too?). The obvious hardware will be something like a rack-mount server with hot-swap trays and dual Xeon-based processors. The data is used for archives of information; files will be of small sizes. I can add to or expand this question if it is too vague. Thanks for looking.

  • Servers at remote sites vs. centralized servers?

    - by Boden
    Looking for some opinions here. We've got three physical locations and site-to-site VPN between all three. Currently we've got Windows domain controllers at each location, with roughly 50 clients at each. The domains are currently separate, and we're looking at integrating the three sites. Email (Exchange) will be located at the primary site, and RDP is already being used at the secondary branches to hit the app servers also located at the primary site. The bulk of the local user load at the other two sites is just file sharing. What would the main benefits and drawbacks be of replacing the local domain controllers with NAS devices, and only keeping the domain controller(s) at the primary site? (assuming upgrades are coming regardless) Under what circumstances would you choose one setup over the other?

  • excel 2010 search function?

    - by Tom
    Can a range A1:A200 be searched for a "name" and then, once found, the cell location be input into a formula? Such as: find "tom" (A1:A200), [found location at cell A22], IF(A22), =IF(MINUTE(Auto_Agent!G27)+(SECOND(Auto_Agent!G27))=0,"",(MINUTE(Auto_Agent!G27)*60+(SECOND(Auto_Agent!G27)))). The problem I'm having is that each time I import data, names can be in different cell locations depending on who is working that day. Example:

    Agent: Tom
    07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
    Avg Skillset Talk Time: 00:06:52
    07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
    () 9/19/2012
    Avg Skillset Talk Time: 00:06:52
    07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05

    Agent: Bill
    07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
    Avg Skillset Talk Time: 00:06:52
    07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05
    () 9/19/2012
    Avg Skillset Talk Time: 00:06:52
    07:59:49 02:31:04 00:00:00 00:42:44 01:33:02 00:00:43 00:02:00 03:09:05

  • Localized database for customers

    - by Jim
    The company I work for has just moved to AWS, and currently they have one very large central database with the instance located in America. However, one of their clients has requested that all of their data be held in the EU. Creating an AWS instance in Ireland isn't a problem; the problem is the database and how to manage it. We were considering having another database that runs in the EU for European customers, and using a different primary key step, so that the primary keys will never conflict in case the two locations need to be merged in the future. The problem is, if we have a customer that uses our system in both America and the EU, we would have to create 2 accounts for that user, and reporting across both regions would not be possible as the connection time would be too high. Is there an alternative way to set this up?

  • Nginx + PHP-FPM on Centos 6.5 gives me 502 Bad Gateway (fpm error: unable to read what child say: Bad file descriptor)

    - by Latheesan Kanes
    I am setting up a standard LEMP stack. My current setup is giving me the following error: 502 Bad Gateway. This is what is currently installed on my server. Here are the configurations I've created/updated so far; can someone take a look at the following and see where the error might be? I've already checked my logs; there's nothing in there (http://i.imgur.com/iRq3ksb.png). And I saw the following in the /var/log/php-fpm/error.log file. Sidenote: both nginx and php-fpm have been configured to run under a local account called www-data, and the following folders exist on the server.

    nginx.conf (global nginx configuration):

    user www-data;
    worker_processes 6;
    worker_rlimit_nofile 100000;
    error_log /var/log/nginx/error.log crit;
    pid /var/run/nginx.pid;

    events {
        worker_connections 2048;
        use epoll;
        multi_accept on;
    }

    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # cache informations about FDs, frequently accessed files can boost performance
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # to boost IO on HDD we can disable access logs
        access_log off;

        # copies data between one FD and other from within the kernel
        # faster then read() + write()
        sendfile on;

        # send headers in one peace, its better then sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # server will close connection after this time
        keepalive_timeout 60;

        # number of requests client can make over keep-alive -- for testing
        keepalive_requests 100000;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;

        # request timed out -- default 60
        client_body_timeout 60;

        # if client stop responding, free up memory -- default 60
        send_timeout 60;

        # reduce the data that needs to be sent over network
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
        gzip_disable "MSIE [1-6]\.";

        # Load vHosts
        include /etc/nginx/conf.d/*.conf;
    }

    conf.d/www.domain.com.conf (my vhost entry):

    ## Nginx php-fpm Upstream
    upstream wwwdomaincom {
        server unix:/var/run/php-fcgi-www-data.sock;
    }

    ## Global Config
    client_max_body_size 10M;
    server_names_hash_bucket_size 64;

    ## Web Server Config
    server {
        ## Server Info
        listen 80;
        server_name domain.com *.domain.com;
        root /home/www-data/public_html;
        index index.html index.php;

        ## Error log
        error_log /home/www-data/logs/nginx-errors.log;

        ## DocumentRoot setup
        location / { try_files $uri $uri/ @handler; expires 30d; }

        ## These locations would be hidden by .htaccess normally
        #location /app/ { deny all; }

        ## Disable .htaccess and other hidden files
        location /. { return 404; }

        ## Magento uses a common front handler
        location @handler { rewrite / /index.php; }

        ## Forward paths like /js/index.php/x.js to relevant handler
        location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

        ## Execute PHP scripts
        location ~ \.php$ {
            try_files $uri =404;
            expires off;
            fastcgi_read_timeout 900;
            fastcgi_pass wwwdomaincom;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        ## GZip Compression
        gzip on;
        gzip_comp_level 8;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain application/xml text/css text/js application/x-javascript;
    }

    /etc/php-fpm.d/www-data.conf (my php-fpm pool config):

    ## Nginx php-fpm Upstream
    upstream wwwdomaincom {
        server unix:/var/run/php-fcgi-www-data.sock;
    }

    ## Global Config
    client_max_body_size 10M;
    server_names_hash_bucket_size 64;

    ## Web Server Config
    server {
        ## Server Info
        listen 80;
        server_name domain.com *.domain.com;
        root /home/www-data/public_html;
        index index.html index.php;

        ## Error log
        error_log /home/www-data/logs/nginx-errors.log;

        ## DocumentRoot setup
        location / { try_files $uri $uri/ @handler; expires 30d; }

        ## These locations would be hidden by .htaccess normally
        #location /app/ { deny all; }

        ## Disable .htaccess and other hidden files
        location /. { return 404; }

        ## Magento uses a common front handler
        location @handler { rewrite / /index.php; }

        ## Forward paths like /js/index.php/x.js to relevant handler
        location ~ .php/ { rewrite ^(.*.php)/ $1 last; }

        ## Execute PHP scripts
        location ~ \.php$ {
            try_files $uri =404;
            expires off;
            fastcgi_read_timeout 900;
            fastcgi_pass wwwdomaincom;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        ## GZip Compression
        gzip on;
        gzip_comp_level 8;
        gzip_min_length 1000;
        gzip_proxied any;
        gzip_types text/plain application/xml text/css text/js application/x-javascript;
    }

    I've got a file in /home/www-data/public_html/index.php with the code <?php phpinfo(); ?> (file uploaded as user www-data).
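
    A few quick checks that usually narrow down a 502 like this, sketched below; the socket path comes from the upstream block above, everything else (service and package names) is generic and may need adjusting:

    # Is php-fpm running, and did it create the socket nginx points at?
    service php-fpm status
    ls -l /var/run/php-fcgi-www-data.sock

    # Does the socket owner/group match the user nginx runs as (www-data here)?
    # In a php-fpm pool this is controlled by listen.owner / listen.group / listen.mode.
    ps aux | grep '[p]hp-fpm'

    # Talk to the pool directly, bypassing nginx (cgi-fcgi comes from the fcgi package)
    SCRIPT_NAME=/index.php \
    SCRIPT_FILENAME=/home/www-data/public_html/index.php \
    REQUEST_METHOD=GET \
    cgi-fcgi -bind -connect /var/run/php-fcgi-www-data.sock

    If cgi-fcgi returns the phpinfo() page, the pool is healthy and the problem is on the nginx side; if php-fpm refuses to start at all, note that the www-data.conf shown above reads like a second copy of the nginx vhost rather than a php-fpm [pool] section, which would be the first thing to fix.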

  • cURL looking for CA in the wrong place

    - by andrewtweber
    On Red Hat Linux, in a PHP script I am setting cURL options as such:

    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, True);
    curl_setopt($ch, CURLOPT_CAINFO, '/home/andrew/share/cacert.pem');

    Yet I am getting this exception when trying to send data (curl error: 77):

    error setting certificate verify locations:
      CAfile: /etc/pki/tls/certs/ca-bundle.crt
      CApath: none

    Why is it looking for the CAfile in /etc/pki/tls/certs/ca-bundle.crt? I don't know where this path is coming from, as I don't set it anywhere. Shouldn't it be looking in the place I specified, /home/andrew/share/cacert.pem? I don't have write permission to /etc/, so simply copying the file there is not an option. Am I missing some other curl option that I should be using? (This is on shared hosting - is it possible that it's disallowing me from setting a different path for the CAfile?)
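
    curl error 77 (CURLE_SSL_CACERT_BADFILE) means libcurl could not set up the CA location it was given; judging by the message it ended up trying the distro default bundle rather than the CURLOPT_CAINFO path. A couple of cheap checks from a shell; the web server user name below is a placeholder, so substitute whatever `ps aux` shows PHP running as:

    # Is the file readable, and does every directory on the path allow traversal?
    ls -l /home/andrew/share/cacert.pem
    namei -l /home/andrew/share/cacert.pem

    # Can the user PHP runs as actually read it? (user name is a guess)
    sudo -u apache head -c 100 /home/andrew/share/cacert.pem

    # On shared hosting, check whether open_basedir could be blocking the path
    php -r 'var_dump(ini_get("open_basedir"));'

    If the path is unreadable from the web server's context, copying the bundle somewhere inside the readable/allowed tree and pointing CURLOPT_CAINFO there is usually enough.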

  • Choosing Truecrypt volume names and keyfile names

    - by Howiecamp
    Any recommendations on what to name TrueCrypt volumes (container files) and where to locate them? Certainly a name like "this is a truecrypt volume.tc" isn't a good idea. Any recommended storage locations? Same question for keyfiles that are generated with TrueCrypt. Finally, let's say you choose an existing file, ymca.mp3, as your keyfile. Given that the file is innocuous and normal looking, isn't it easy to forget that it's your key file, so when you get sick of the Village People and delete the song, you're hosed?

  • How to reliably synchronise file servers between London and Shanghai?

    - by Andy S
    We have two offices, one in London and one in Shanghai, each needing to be able to access the same set of files. This means we need a solid, speedy means of synchronising a set of folders between servers at either office. They're likely to be Windows servers, but we could look at Linux boxes if the software side makes more sense on *nix. We've considered Rsync, Unison, Gluster, and a few other options, but none of them seem capable of reliably keeping the servers in sync between such distant office locations. Each office is on DSL connectivity over the open internet, so encryption is also a factor. Does anyone have any hints for getting the servers synchronising in as close to real time as possible, without dying constantly? Andy
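
    For the scheduled (rather than truly real-time) case, rsync over ssh is still the usual baseline, run from cron or a loop; a minimal sketch with made-up host names, paths and key file:

    # Push changes from London to Shanghai every few minutes; ssh provides the encryption,
    # --partial lets large transfers resume cleanly on a flaky DSL link
    rsync -az --delete --partial --timeout=120 \
        -e "ssh -i /root/.ssh/sync_key" \
        /srv/shared/ syncuser@shanghai.example.com:/srv/shared/

    Genuinely bidirectional, near-real-time sync over a high-latency DSL link is a different problem; tools built for it (Unison for two-way batch runs, or DFS Replication if the boxes stay on Windows) handle conflicts better than plain rsync.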

  • Why do some web servers not respond to icmp requests?

    - by John Himmelman
    What is the purpose of blocking/dropping inbound ICMP traffic on a public web server? Is it common for it to be blocked? I had to test whether a server was accessible from various locations (tested from various servers located in different states/countries). I'd rely on ping as a quick & reliable method of determining if a server was online/network-accessible. After not receiving a response from a couple of boxes, I tried using lynx to load the site, and it worked.
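
    Yes, dropping inbound ICMP echo is common (often an over-broad hardening default), so ping alone is not a reliable liveness test; checking the actual service port over TCP avoids the problem. Two one-liners, with the host name as a placeholder:

    # Does the TCP port answer at all?
    nc -zv www.example.com 80

    # Does the HTTP service respond? (status line only)
    curl -sI --max-time 5 http://www.example.com/ | head -n 1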

  • Organizing files relationally in Windows 7?

    - by Cayetano Gonçalves
    I just took a new job as a policy analyst, and after even one week, keeping track of hundreds of files (lawsuits, legislation, letters, etc.) in Windows 7 is proving difficult. In my last job I was a database architect, and I helped build Linux-based servers to track files across an entire department; however, there is no way for me to do that at this time in this job. Is there any way to track files/indices/locations/tags/themes and store them in some kind of RDBMS system, instead of storing the files in folders that only allow for flat and fixed storage? For example, if I have a file that deals with:
    - ELID organization
    - Appeals court
    - John Smith
    it really is inconvenient to have to decide which one of these tags to create into a folder and place the file into it, when it falls under all the categories. Even if I could place tags the way you can on Stack Exchange on files, it would solve a lot of heartache.

  • Get-QADComputer -LdapFilter & NOT operator

    - by dboftlp
    I'm having issues excluding an OU from my LDAP filter:

    $DaysAgo = (Get-Date).AddDays(-31)
    $ft = $DaysAgo.ToFileTime()
    Get-QADComputer -SizeLimit 0 -IncludeAllProperties -SearchRoot 'DC=My,DC=Domain,DC=Local' -LdapFilter "(&(objectcategory=computer)(lastLogonTimeStamp<=$ft) (!(ou:dn:=DisabledPCs))(|(operatingsystem=Windows 2000 Professional) (operatingSystem=Windows XP*)(operatingSystem=Windows 7*) (operatingSystem=Windows Vista*)(operatingsystem=Windows 2000 Server) (operatingsystem=Windows Server*)))"

    I'm looking to query for all Windows OS systems that haven't logged in to AD for more than 31 days & that are not already in the OU "DisabledPCs", which is where I'll be moving them to. When I run it now, I'm getting all the systems I'm looking for, including those in the "DisabledPCs" OU... I've tried several variations including:

    (&(!(ou:dn:=DisabledPCs)))

    As well as putting it in different locations in the filter (not that I thought it would make a difference, but I obviously don't know that...). Thanks in advance for any help, -dboftlp

  • cygwin rsync over ssh very slow

    - by Waleed Hamra
    I have 2 machines running Windows XP SP3. I have Cygwin installed on both, version 1.7. I have rsync and ssh installed on both, and configured using default settings as per the ssh-host-config and ssh-user-config programs provided. I moved the public keys into their respective locations, and basically ssh is working fine. I began an rsync operation, using:

    rsync -av --delete --hard-links local_dir username@other_machine:/some_dir

    Well... on both machines, the processor is running near idle, no heavy usage. I checked IO using Process Explorer on both machines, and that too is at normal levels (1~2 MB/s), so I can't see where the bottlenecks are, because network performance is awful. I'm not going over 1 MB/s... when a normal file copy using Windows sharing achieves some ~10 MB/s. What could be wrong?
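
    A useful first step is to separate "rsync is slow" from "ssh under Cygwin is slow"; a rough sketch that measures the raw ssh pipe and then tries rsync without ssh (the daemon module name and config path are placeholders):

    # Raw throughput of the ssh transport alone (100 MB of zeros, no disk involved on the far side)
    dd if=/dev/zero bs=1M count=100 | ssh username@other_machine 'cat > /dev/null'

    # For comparison, rsync via a temporary rsync daemon instead of ssh
    # (on the other machine: rsync --daemon --no-detach --config=/tmp/rsyncd.conf)
    rsync -av --delete --hard-links local_dir rsync://other_machine/module_name/

    If the dd test is also stuck around 1 MB/s, the bottleneck is the Cygwin ssh path (cipher choice and Cygwin pipe overhead are the usual suspects) rather than rsync itself.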

  • PHP does not allow https connections

    - by FunkyChicken
    Hey guys, I'm running PHP 5.4.0 and I cannot use cURL or file_get_contents() for HTTPS connections. Using cURL in a PHP script shows:

    [root@ns1]# /opt/php/bin/php -q test.php
    * About to connect() to www.google.com port 443
    *   Trying 74.125.225.210... * connected
    * Connected to www.google.com (74.125.225.210) port 443
    * successfully set certificate verify locations:
    *   CAfile: /etc/pki/tls/certs/ca-bundle.crt
      CApath: none
    Segmentation fault

    Using file_get_contents() shows:

    Warning: file_get_contents(): Unable to find the wrapper "https" - did you forget to enable it when you configured PHP? in /test.php

    OpenSSL and openssl-devel are installed, and PHP is also configured with cURL support for SSL connections. See: http://i.imgur.com/ExAIf.png Any idea what might be going wrong? Further info: CentOS 5.8 (64-bit) with Nginx 1.2.4
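
    The missing "https" wrapper warning normally means this particular PHP build was compiled without the OpenSSL extension, which would also leave cURL in a bad state for TLS. A quick way to confirm, plus a hedged sketch of rebuilding a source-built PHP in /opt/php with OpenSSL support (the extra configure flags are illustrative, not a complete list):

    # Is the openssl extension present in this build?
    /opt/php/bin/php -m | grep -i openssl
    /opt/php/bin/php -i | grep -i openssl

    # If it is missing, reconfigure and rebuild from the PHP 5.4.0 source tree
    ./configure --prefix=/opt/php --with-openssl --with-curl --with-zlib
    make && make install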
