Search Results

Search found 19278 results on 772 pages for 'enum support'.


  • most reliable linux terminal app / general procedures for process stability

    - by intuited
    I've been using konsole (KDE 4.2) for a while now but it crashed recently. Konsole is efficiently designed to use one instance for all of the windows for your entire X session. Extra-unfortunately, because of this ingenuity the crash brought down all the humpty-dumptys and their bashes and their bashes' applications and all the begattens' begattens all the way down to Jebodiah Springfield into one big flat nonexistent omelette.

    The fact that this app is capable of crashing under any circumstances is pretty disappointing. Although KDE 4.2 is not expected to be entirely stable -- and yes, I know, I should update my distro -- it's still a no-sell for me, since if at all possible, this sort of thing Shouldn't Happen to something that's likely to be a foundation for an entire working environment. Maybe this is arrogant and unrealistic, but if it's possible to have something more stable, I want it.

    So other than running under screen -- which is fun, nifty, and thus far flawless in its reliability, but which has some issues with not understanding certain keycodes -- I'm looking for ways to improve my environment's reliability.

    The most obvious strategy is to cast about for a more reliable console app. A standard featureset -- which to me includes tabbed windows, Unicode support, and a decent level of keyboard shortcut configuration -- is pretty much essential. I'm currently running gnome-terminal and roxterm, both of which have acceptable featuresets (pretty much identical, actually; I think rox is actually the superset), and neither of which have provided me with extensive, objective reliability data. Not that they were expected to.

    Other strategies are also welcome. Were I responding to this question I would perhaps suggest backgrounding critical tasks with & and/or disowning them so they don't come down with the global pandemic. And stuff like that.
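    For the backgrounding/disowning strategy, a minimal sketch (the command and job number are placeholders):

        # Start a long-running task, then detach it so a terminal crash
        # won't deliver SIGHUP to it:
        long-running-task &
        disown -h %1      # %1 is the job number; -h keeps the job listed but unsignalled

        # Or launch it immune to hangups from the start:
        nohup long-running-task > task.log 2>&1 &

        # setsid goes further and detaches from the controlling terminal entirely:
        setsid long-running-task > task.log 2>&1 &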

    Read the article

  • How can a Virtual PC with Win XP install the East Asian Language? (or does any browser come with Chinese fonts?)

    - by Jian Lin
    How can a Virtual PC with Win XP on it install the East Asian Language? (or does any browser come with Chinese fonts?)

    After setting up a virtual PC with Win XP, if a Chinese font is needed, the usual way is to go to the Control Panel, select "Regional and Language", go to the second tab and check the box "Install Files for East Asian Languages". After clicking OK, it asks for the file cplexe.exe on the XP SP3 CD 3... and it is said to be about 230MB... In such a case, how can the language pack or fonts be installed?

    (Update: I found that this is true for Windows 7's Virtual PC with XP on it, as well as the XP SP3 with IE 8 that can be downloaded from the link below.)

    (I downloaded the virtual hard drive file .vhd from http://www.microsoft.com/downloads/details.aspx?FamilyId=21EABB90-958F-4B64-B5F1-73D0A413C8EF&displaylang=en so there is no "CD 3"... there)

    Or does any browser come with all the Unicode fonts without needing the OS to support it?
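    One workaround worth trying (my assumption, not from the original post): copy the i386 folder out of an XP CD or the downloaded image to a local path inside the VM, then repoint Windows' installation source at it so the language-files prompt stops asking for a CD. The path below is hypothetical:

        REM Assumes the i386 folder was copied to C:\XPSETUP\i386
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Setup" /v SourcePath /t REG_SZ /d "C:\XPSETUP" /f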

    Read the article

  • What registry key or windows file determines where monitors are placed in a multi monitor environment?

    - by dfree
    So I have a laptop with a USB-to-VGA adapter (http://www.startech.com/item/USB2VGAE2-USB-VGA-External-Multi-Monitor-Video-Adapter.aspx) which allows me to add a third monitor to my laptop (the second monitor uses the onboard slot).

    It worked fine on Windows Vista - you could go into Windows display settings and Windows would recognize the third monitor and let you drag it around accordingly. With Windows 7, the third monitor literally is not there in Windows display settings. The driver allows you to display to the third monitor, but you can't move where it is. Its placement is wrong relative to my other two monitors (if you drag windows over to it, they end up on the bottom when they should be aligned). I called tech support and they said that there isn't a driver with this functionality for Windows 7 yet.

    But here's my hunch. The monitor placement is still somewhat similar to where I had it on Vista, it's just off by about 500 pixels or so. I think there is either a registry key or driver file somewhere that is telling this monitor where to exist. If I could just modify the number and move it up 500 pixels, it would be in the right place and I wouldn't have to wait 6 months for the company to come out with a new driver. Any ideas???
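    If the hunch is right, Windows 7 keeps per-monitor layout under the GraphicsDrivers key. A sketch for locating candidate values - I have not verified that the Position.* entries there are the ones this adapter's driver honors:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers\Configuration" /s | findstr /i "Position"

    Exporting that key, editing the offsets, and re-importing (after a backup) would be the experiment.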

    Read the article

  • Uploadify Flash Uploader and Random UPLOAD_ERR_CANT_WRITE errors

    - by dcneiner
    I am using Uploadify to provide progress bar support for file uploads on a PHP app I built. It works perfectly for a few uploads, then every few uploads it fails and the data from the $_FILES array reveals an UPLOAD_ERR_CANT_WRITE error (error code 7).

    I ran Paros proxy between my browser and the server to see the difference between a passing and a failing request. The only difference was the content separator for the multi-part post, which changes every time. I would conclude this was fully a server error, except that with a plain jane form I cannot reproduce the error. I am not a server guy, so please let me know what information is needed to troubleshoot this and I will update the question with those details.

    I did place these lines in the .htaccess, but to no avail. The site is hosted on Rackspace Cloud Sites so my configuration options are limited:

        <IfModule mod_security.c>
            SecFilterEngine Off
            SecFilterScanPOST Off
        </IfModule>
        php_value upload_max_filesize 10M
        php_value post_max_size 10M
        php_value max_execution_time 200
        php_value max_input_time 200
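    UPLOAD_ERR_CANT_WRITE means PHP itself failed to write the temporary file to disk, so one thing worth logging server-side is where PHP is trying to write and whether that location is intermittently unwritable (plausible on shared/clustered hosting). A minimal sketch; 'Filedata' is Uploadify's default field name, adjust if customized:

        <?php
        // Log the upload error code and the temp dir PHP is using.
        $err = isset($_FILES['Filedata']['error']) ? $_FILES['Filedata']['error'] : 'none';
        error_log('upload error code: ' . $err);
        error_log('upload_tmp_dir: ' . ini_get('upload_tmp_dir')); // empty = system default
        error_log('temp dir writable: ' . (is_writable(sys_get_temp_dir()) ? 'yes' : 'no'));

    If the default temp dir is the problem, setting upload_tmp_dir in .htaccess to a directory inside your own web space may help, host permitting.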

    Read the article

  • unable to boot into safe mode even after fixing registry

    - by Anirudh Goel
    I have a Windows XP SP3 system which is infected by the Sality worm. The usual symptoms of Task Manager and regedit being disabled were there, and I found that I was unable to boot my system into safe mode. I then learned that the Sality worm removes the SAFEBOOT keys from the registry hive, so I downloaded the reg file from http://support.kaspersky.com/faq/?qid=208279889 and was successfully able to apply it to my system. But still, when I hit F8 during boot and select the safe mode option, the machine restarts after loading mup.sys. I don't know what more to do to get to safe mode.

    The virus is still there in a dormant stage. I can verify that because Task Manager and regedit were no longer disabled after I restarted in normal mode, and I could browse any site without it killing the browser process. I also ran the SalityKiller from the same link above and it healed all infected exe files.

    This is related to another question which I have asked here, but I don't see how a common solution can solve both of those problems. Any help, folks? Thanks
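    After importing the fix, it is worth confirming the keys actually landed (and survived a reboot - Sality variants can re-delete them). If these come back empty, the restart-after-mup.sys loop is expected:

        reg query HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Minimal
        reg query HKLM\SYSTEM\CurrentControlSet\Control\SafeBoot\Network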

    Read the article

  • Cannot install jdk 1.5 on Ubuntu 12.04

    - by u123
    I have installed Ubuntu 12.04.1 LTS (GNU/Linux 3.2.0-23-generic x86_64). Some info about the machine:

        $ grep --color "model name" /proc/cpuinfo
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz
        model name : Intel(R) Xeon(R) CPU E5430 @ 2.66GHz

    I need to install jdk5 to support an old application. I have tried:

        ~$ sudo apt-get install openjdk-5-jdk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package openjdk-5-jdk

    I have also tried:

        ~$ sudo apt-get install sun-java5-jdk
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        E: Unable to locate package sun-java5-jdk

    So it's not available in the repos. I have tried to follow this guide (adding the jaunty repos): http://leonardo-pinho.blogspot.dk/2010/11/java-15-no-ubuntu-1010.html but same result.

    Then I have tried to download jdk-1_5_0_22-linux-i586.bin from here: http://www.oracle.com/technetwork/java/javasebusiness/downloads/java-archive-downloads-javase5-419410.html#jdk-1.5.0_22-oth-JPR and do:

        ~$ chmod a+x jdk-1_5_0_22-linux-i586.bin
        ~$ sudo ./jdk-1_5_0_22-linux-i586.bin
        Sun Microsystems, Inc. Binary Code License Agreement
        yes
        Unpacking...
        Checksumming...
        0
        0
        Extracting...
        ./jdk-1_5_0_22-linux-i586.bin: 424: ./jdk-1_5_0_22-linux-i586.bin: ./install.sfx.19556: not found
        ./jdk-1_5_0_22-linux-i586.bin: 1: cd: can't cd to jdk1.5.0_22

    Any suggestions?
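    The "install.sfx.19556: not found" symptom is what a 32-bit binary typically produces on a 64-bit Ubuntu that lacks the 32-bit runtime - the extractor can't execute, so the later cd fails too. An educated guess, not verified on this exact machine:

        sudo apt-get install ia32-libs    # 32-bit compatibility libs on 12.04
        chmod a+x jdk-1_5_0_22-linux-i586.bin
        ./jdk-1_5_0_22-linux-i586.bin     # sudo not required; unpacks into ./jdk1.5.0_22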

    Read the article

  • Converting Lighttpd config to NginX with php-fpm

    - by Le Dude
    Having so many issues with NginX configuration since I'm new to NginX. Been using Lighttpd for quite some time. Here is the base info.

    New Machine: CentOS 6.3 64 Bit - NginX 1.2.4-1.e16.ngx - Php-FPM 5.3.18-1.e16.remi
    Old Machine: CentOS 6.2 64 Bit - Lighttpd 1.4.25-3.e16

    Original Lighttpd config file:

        #######################################################################
        ##
        ##  /etc/lighttpd/lighttpd.conf
        ##
        ##  check /etc/lighttpd/conf.d/*.conf for the configuration of modules.
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Some Variable definition which will make chrooting easier.
        ##
        ##  if you add a variable here. Add the corresponding variable in the
        ##  chroot example aswell.
        ##
        var.log_root    = "/var/log/lighttpd"
        var.server_root = "/var/www"
        var.state_dir   = "/var/run"
        var.home_dir    = "/var/lib/lighttpd"
        var.conf_dir    = "/etc/lighttpd"

        ##
        ## run the server chrooted.
        ##
        ## This requires root permissions during startup.
        ##
        ## If you run Chrooted set the the variables to directories relative to
        ## the chroot dir.
        ##
        ## example chroot configuration:
        ##
        #var.log_root    = "/logs"
        #var.server_root = "/"
        #var.state_dir   = "/run"
        #var.home_dir    = "/lib/lighttpd"
        #var.vhosts_dir  = "/vhosts"
        #var.conf_dir    = "/etc"
        #
        #server.chroot   = "/srv/www"

        ##
        ## Some additional variables to make the configuration easier
        ##

        ##
        ## Base directory for all virtual hosts
        ##
        ## used in:
        ## conf.d/evhost.conf
        ## conf.d/simple_vhost.conf
        ## vhosts.d/vhosts.template
        ##
        var.vhosts_dir  = server_root + "/vhosts"

        ##
        ## Cache for mod_compress
        ##
        ## used in:
        ## conf.d/compress.conf
        ##
        var.cache_dir   = "/var/cache/lighttpd"

        ##
        ## Base directory for sockets.
        ##
        ## used in:
        ## conf.d/fastcgi.conf
        ## conf.d/scgi.conf
        ##
        var.socket_dir  = home_dir + "/sockets"
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Load the modules.
        include "modules.conf"
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Basic Configuration
        ## ---------------------
        ##
        server.port = 80

        ##
        ## Use IPv6?
        ##
        #server.use-ipv6 = "enable"

        ##
        ## bind to a specific IP
        ##
        #server.bind = "localhost"

        ##
        ## Run as a different username/groupname.
        ## This requires root permissions during startup.
        ##
        server.username  = "lighttpd"
        server.groupname = "lighttpd"

        ##
        ## enable core files.
        ##
        #server.core-files = "disable"

        ##
        ## Document root
        ##
        server.document-root = server_root + "/lighttpd"

        ##
        ## The value for the "Server:" response field.
        ##
        ## It would be nice to keep it at "lighttpd".
        ##
        #server.tag = "lighttpd"

        ##
        ## store a pid file
        ##
        server.pid-file = state_dir + "/lighttpd.pid"
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Logging Options
        ## ------------------
        ##
        ## all logging options can be overwritten per vhost.
        ##
        ## Path to the error log file
        ##
        server.errorlog = log_root + "/error.log"

        ##
        ## If you want to log to syslog you have to unset the
        ## server.errorlog setting and uncomment the next line.
        ##
        #server.errorlog-use-syslog = "enable"

        ##
        ## Access log config
        ##
        include "conf.d/access_log.conf"

        ##
        ## The debug options are moved into their own file.
        ## see conf.d/debug.conf for various options for request debugging.
        ##
        include "conf.d/debug.conf"
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Tuning/Performance
        ## --------------------
        ##
        ## corresponding documentation:
        ## http://www.lighttpd.net/documentation/performance.html
        ##
        ## set the event-handler (read the performance section in the manual)
        ##
        ## possible options on linux are:
        ##
        ## select
        ## poll
        ## linux-sysepoll
        ##
        ## linux-sysepoll is recommended on kernel 2.6.
        ##
        server.event-handler = "linux-sysepoll"

        ##
        ## The basic network interface for all platforms at the syscalls read()
        ## and write(). Every modern OS provides its own syscall to help network
        ## servers transfer files as fast as possible
        ##
        ## linux-sendfile - is recommended for small files.
        ## writev         - is recommended for sending many large files
        ##
        server.network-backend = "linux-sendfile"

        ##
        ## As lighttpd is a single-threaded server, its main resource limit is
        ## the number of file descriptors, which is set to 1024 by default (on
        ## most systems).
        ##
        ## If you are running a high-traffic site you might want to increase this
        ## limit by setting server.max-fds.
        ##
        ## Changing this setting requires root permissions on startup. see
        ## server.username/server.groupname.
        ##
        ## By default lighttpd would not change the operation system default.
        ## But setting it to 2048 is a better default for busy servers.
        ##
        ## With SELinux enabled, this is denied by default and needs to be allowed
        ## by running the following once: setsebool -P httpd_setrlimit on
        server.max-fds = 2048

        ##
        ## Stat() call caching.
        ##
        ## lighttpd can utilize FAM/Gamin to cache stat call.
        ##
        ## possible values are:
        ## disable, simple or fam.
        ##
        server.stat-cache-engine = "simple"

        ##
        ## Fine tuning for the request handling
        ##
        ## max-connections == max-fds/2 (maybe /3)
        ## means the other file handles are used for fastcgi/files
        ##
        server.max-connections = 1024

        ##
        ## How many seconds to keep a keep-alive connection open,
        ## until we consider it idle.
        ##
        ## Default: 5
        ##
        #server.max-keep-alive-idle = 5

        ##
        ## How many keep-alive requests until closing the connection.
        ##
        ## Default: 16
        ##
        #server.max-keep-alive-requests = 18

        ##
        ## Maximum size of a request in kilobytes.
        ## By default it is unlimited (0).
        ##
        ## Uploads to your server cant be larger than this value.
        ##
        #server.max-request-size = 0

        ##
        ## Time to read from a socket before we consider it idle.
        ##
        ## Default: 60
        ##
        #server.max-read-idle = 60

        ##
        ## Time to write to a socket before we consider it idle.
        ##
        ## Default: 360
        ##
        #server.max-write-idle = 360

        ##
        ##  Traffic Shaping
        ## -----------------
        ##
        ## see /usr/share/doc/lighttpd/traffic-shaping.txt
        ##
        ## Values are in kilobyte per second.
        ##
        ## Keep in mind that a limit below 32kB/s might actually limit the
        ## traffic to 32kB/s. This is caused by the size of the TCP send
        ## buffer.
        ##
        ## per server:
        ##
        #server.kbytes-per-second = 128

        ##
        ## per connection:
        ##
        #connection.kbytes-per-second = 32
        ##
        #######################################################################

        #######################################################################
        ##
        ##  Filename/File handling
        ## ------------------------
        ##
        ## files to check for if .../ is requested
        ## index-file.names            = ( "index.php", "index.rb", "index.html",
        ##                                 "index.htm", "default.htm" )
        ##
        index-file.names += ( "index.xhtml", "index.html", "index.htm", "default.htm", "index.php" )

        ##
        ## deny access the file-extensions
        ##
        ## ~    is for backupfiles from vi, emacs, joe, ...
        ## .inc is often used for code includes which should in general not be part
        ## of the document-root
        url.access-deny = ( "~", ".inc" )

        ##
        ## disable range requests for pdf files
        ## workaround for a bug in the Acrobat Reader plugin.
        ##
        $HTTP["url"] =~ "\.pdf$" {
            server.range-requests = "disable"
        }

        ##
        ## url handling modules (rewrite, redirect)
        ##
        #url.rewrite  = ( "^/$" => "/server-status" )
        #url.redirect = ( "^/wishlist/(.+)" => "http://www.example.com/$1" )

        ##
        ## both rewrite/redirect support back reference to regex conditional using %n
        ##
        #$HTTP["host"] =~ "^www\.(.*)" {
        #    url.redirect = ( "^/(.*)" => "http://%1/$1" )
        #}

        ##
        ## which extensions should not be handle via static-file transfer
        ##
        ## .php, .pl, .fcgi are most often handled by mod_fastcgi or mod_cgi
        ##
        static-file.exclude-extensions = ( ".php", ".pl", ".fcgi", ".scgi" )

        ##
        ## error-handler for status 404
        ##
        #server.error-handler-404 = "/error-handler.html"
        #server.error-handler-404 = "/error-handler.php"

        ##
        ## Format: <errorfile-prefix><status-code>.html
        ## -> ..../status-404.html for 'File not found'
        ##
        #server.errorfile-prefix = "/srv/www/htdocs/errors/status-"

        ##
        ## mimetype mapping
        ##
        include "conf.d/mime.conf"

        ##
        ## directory listing configuration
        ##
        include "conf.d/dirlisting.conf"

        ##
        ## Should lighttpd follow symlinks?
        ##
        server.follow-symlink = "enable"

        ##
        ## force all filenames to be lowercase?
        ##
        #server.force-lowercase-filenames = "disable"

        ##
        ## defaults to /var/tmp as we assume it is a local harddisk
        ##
        server.upload-dirs = ( "/var/tmp" )
        ##
        #######################################################################

        #######################################################################
        ##
        ##  SSL Support
        ## -------------
        ##
        ## To enable SSL for the whole server you have to provide a valid
        ## certificate and have to enable the SSL engine.::
        ##
        ##   ssl.engine = "enable"
        ##   ssl.pemfile = "/path/to/server.pem"
        ##
        ## The HTTPS protocol does not allow you to use name-based virtual
        ## hosting with SSL. If you want to run multiple SSL servers with
        ## one lighttpd instance you must use IP-based virtual hosting: ::
        ##
        ##   $SERVER["socket"] == "10.0.0.1:443" {
        ##     ssl.engine                  = "enable"
        ##     ssl.pemfile                 = "/etc/ssl/private/www.example.com.pem"
        ##     server.name                 = "www.example.com"
        ##
        ##     server.document-root        = "/srv/www/vhosts/example.com/www/"
        ##   }
        ##
        ## If you have a .crt and a .key file, cat them together into a
        ## single PEM file:
        ## $ cat /etc/ssl/private/lighttpd.key /etc/ssl/certs/lighttpd.crt \
        ##   > /etc/ssl/private/lighttpd.pem
        ##
        #ssl.pemfile = "/etc/ssl/private/lighttpd.pem"

        ##
        ## optionally pass the CA certificate here.
        ##
        ##
        #ssl.ca-file = ""
        ##
        #######################################################################

        #######################################################################
        ##
        ##  custom includes like vhosts.
        ##
        #include "conf.d/config.conf"
        #include_shell "cat /etc/lighttpd/vhosts.d/*.conf"
        ##
        #######################################################################

        #######################################################################
        ### Custom Added by me
        #url.rewrite-once = (".*\.(js|ico|gif|jpg|png|css|jar|class)$" => "$0", "" => "/index.php")
        url.rewrite-once = (
            ".*\?(.*)$" => "/index.php?$1",
            "^/js/.*$" => "$0",
            "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
            "" => "/index.php"
        )
        # expire.url = ( "" => "access 1 days" )
        include "myvhost-vhosts.conf"
        #######################################################################

    Here is my vhost file for lighttpd:

        $HTTP["host"] =~ "192.168.8.35$" {
            server.document-root = "/var/www/lighttpd/qc41022012/public"
            server.errorlog = "/var/log/lighttpd/error.log"
            accesslog.filename = "/var/log/lighttpd/access.log"
            server.error-handler-404 = "/e404.php"
        }

    and here is my nginx.conf file:

        user nginx;
        worker_processes 5;

        error_log /var/log/nginx/error.log warn;
        pid /var/run/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include /etc/nginx/mime.types;
            default_type application/octet-stream;

            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';

            access_log /var/log/nginx/testsite/logs/access.log main;

            sendfile on;
            #tcp_nopush on;

            keepalive_timeout 65;

            #gzip on;

            # include /etc/nginx/conf.d/*.conf;
            ## I added this ##
            include /etc/nginx/sites-available/*;
        }

    Here is my NginX vhost file:

        server {
            server_name 192.168.8.91;

            access_log /var/log/nginx/myapps/logs/access.log;
            error_log /var/log/nginx/myapps/logs/error.log;

            root /var/www/html/myapps/public;

            location / {
                index index.html index.htm index.php;
            }

            location = /favicon.ico {
                return 204;
                access_log off;
                log_not_found off;
            }

            # location ~ \.php$ {
            #     try_files $uri /index.php;
            #     include /etc/nginx/fastcgi_params;
            #     fastcgi_pass 127.0.0.1:9000;
            #     fastcgi_index index.php;
            #     fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            #     fastcgi_param SCRIPT_NAME $fastcgi_script_name;

            location ~ \.php.*$ {
                rewrite ^(.*.php)/ $1 last;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                # fastcgi_intercept_errors on;
                # fastcgi_param SCRIPT_FILENAME $document_root/index.php;
                # fastcgi_param PATH_INFO $uri;
                # fastcgi_pass 127.0.0.1:9000;
                # include fastcgi_params;
            }
        }

    We have a custom app that we created that works great with lighttpd. I went through some headaches too when we were trying to figure out how to make it work with lighttpd. This is the line that makes it work in lighttpd:

        url.rewrite-once = (
            ".*\?(.*)$" => "/index.php?$1",
            "^/js/.*$" => "$0",
            "^.*\.(js|ico|gif|jpg|png|css|swf|jar|class)$" => "$0",
            "" => "/index.php"
        )

    but I couldn't figure out how to make it work in NginX. The web server runs just fine when we use the phpinfo.php test file. However, as soon as I point it to my app, nothing comes up. I checked the error.log file and there's no error. Very mind boggling. I spent over a week trying to figure it out with no luck. Please help?

    Read the article

  • Multiple contacts with shared information

    - by Keith Thompson
    Background: I currently have several hundred contacts, synchronized between a Microsoft Exchange server and several mobile devices. I also save exported copies of the contacts in .vcf format.

    Is there a good way (application, file format, whatever) to maintain contacts with shared information? A very common scenario is that I have contacts for two or more people who live in the same house, for example:

        John Doe
        123 Main Street, Anytown USA
        Home: 555-555-1111
        Work: 555-555-2222
        Mobile: 555-555-3333
        E-mail: [email protected]

        Jane Doe
        123 Main Street, Anytown USA
        Home: 555-555-1111
        Work: 555-555-4444
        Mobile: 555-555-5555
        E-mail: [email protected]

    As you can see, both contacts have the same home address and phone number, but distinct names and work and mobile phone numbers. (Other information might also be either shared or distinct.)

    The applications and file formats I'm familiar with don't seem to have a good way to deal with this. If I use a single "John & Jane Doe" contact for both, it's difficult to distinguish the distinct information (if I want to call Jane's mobile phone rather than John's). If I use a separate contact for each, I have to remember to update both of them (or all of them, for N > 2) when they move or change their home phone number.

    An ideal solution would let me create a record containing information for their household, and have each of their contact records contain a reference to the household record, so that when I view John's contact record I see both shared and distinct information. Is there anything out there that has good support for this kind of thing? (I would think there would be, since it's a very common scenario.)

    (I suppose I could roll my own system that generates merged .vcf files from some extended format, but that wouldn't play well with synchronizing across multiple devices.)
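    The closest standard I know of is vCard 4.0 (RFC 6350), whose KIND:group and MEMBER properties can link cards together - though whether any given sync chain (Exchange, mobile devices) honors them is another question entirely. A sketch of the idea; the urn:uuid values are hypothetical:

        BEGIN:VCARD
        VERSION:4.0
        KIND:group
        FN:Doe household
        ADR:;;123 Main Street;Anytown;;;USA
        TEL;TYPE=home:555-555-1111
        MEMBER:urn:uuid:john-doe-card-uuid
        MEMBER:urn:uuid:jane-doe-card-uuid
        END:VCARD

    A "merged .vcf generator" would essentially flatten this group card into each member card at export time.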

    Read the article

  • "Steam needs to be online to update" - 404 fetching *_osx.zip.*

    - by Chris Boyle
    Since yesterday evening, when I launch Steam on OS X, a self-update progress bar appears instead (at 0 of 30MB or so). This bar does not advance, and an error dialog appears: "Steam needs to be online to update. Please confirm your network connection and try again." The app then exits. This happens whether wifi or ethernet or both are connected, and pings to the outside world succeed throughout.

    If I look at the logs in Console, they are very similar to this example (though that's not mine). Specifically:

        Success! http://store.steampowered.com/public/client/steam_client_osx?date=718277
        [...]
        Failed! http://cdn.store.steampowered.com/public/client/breakpad_osx.zip.27f59114a86fcd50533e1d7b128f9300947f9969
        Failed! http://cdn.store.steampowered.com/public/client/steam_osx.zip.11a99384214805f2dd3be5084ba6be61d662f8ac
        Failed! http://cdn.store.steampowered.com/public/client/miles_osx.zip.d9fb546541f59c1fdd03962a605236b1021abab8

    Requesting the first URL successfully returns some data, including the filenames of the latter three, and requesting any of those gives me a 404 (I've tried multiple clients on multiple continents). Searches on Google and Twitter show about 10-20 others having this problem in the past 24 hours, but hardly the angry mob I'd expect if the problem affected all Steam OS X users.

    Things that have already been tried with no effect:

    - Switching between wifi and ethernet.
    - Killing all Steam processes including ipcserver.
    - Moving the ~/Library/Application Support/Steam/registry.vdf file away.
    - Requesting those URLs with other clients and from other locations.

    Interesting: that first URL with the date parameter returns the same content even without that parameter (and thus would lead to the same 404s), suggesting that the problem is not necessarily specific to coming from a particular currently-installed version of Steam.

    Read the article

  • postfix Mail filters not running behind a controlled environment

    - by Ashish
    Hi,

    I have deployed a postfix server for email receiving. On it I have configured a SenderID + SPF milter, referring to http://www.postfix.org/MILTER_README.html The command that I used is as follows:

        ./sid-filter -u postfix -p inet:10027@localhost -l

    Following are my settings in main.cf:

        # Milter support for smtpd mail
        smtpd_milters = inet:localhost:10027, inet:localhost:10028
        # Milters for non-SMTP mail.
        non_smtpd_milters = inet:localhost:10027, inet:localhost:10028
        milter_default_action = reject
        # Postfix >= 2.6
        #milter_protocol = 6
        # 2.3 <= Postfix <= 2.5
        milter_protocol = 2

    Now I have this observation:

    1. One postfix instance, set up on AWS CentOS 5.5, is working fine and is able to receive mail on the defined MX record.
    2. A similar postfix instance (set up as in step 1) behind one of the corporate firewalls is not able to receive any mail and gives the following type of error logs:

        connect from xxxxxx.austin.hp.com[xx.xxx.96.198]
        May 25 13:20:02 g2t0385g postfix/smtpd[11733]: C11F9B0194: client=xxxxxxx.austin.hp.com[15.217.96.198]
        May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: message-id=
        May 25 13:20:03 g2t0385g postfix/cleanup[11814]: C11F9B0194: milter-reject: END-OF-MESSAGE from xxxxxx.austin.hp.com[xx.xxx.96.198]: 5.7.1 Command rejected; from= to= proto=ESMTP helo=

    Here the 'sid-filter' is giving problems. Any idea what I am doing wrong? Please help.

    Thanks in advance
    Ashish Sharma
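    Before suspecting the firewall, it's worth ruling out the milter connection itself on the failing box - sid-filter not actually listening, or "localhost" resolving oddly there. Generic checks, not from the original post:

        # Is anything listening on the milter ports postfix expects?
        netstat -ltnp | grep -E ':1002[78]'

        # Does localhost resolve to 127.0.0.1 on this host?
        getent hosts localhost

    If the socket is up, milter_default_action = accept (temporarily) would show whether the milter's verdict, rather than connectivity, is producing the 5.7.1 rejects.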

    Read the article

  • PXE bootable image for terminal server?

    - by HeavenCore
    We have 300 Windows XP machines on cruddy old hardware across the company. With extended support for XP ending April next year, we're looking into our options. A couple of options:

    1. Replace the 300 PCs with full Windows 7 PCs (£100k+?) - no use of terminal server (our current model).
    2. Replace the 300 PCs with off-the-shelf thin clients and make use of our terminal server - cheaper clients, but Terminal Server CALs required?
    3. Keep the 300 PCs, replace Windows XP with a Linux thin client capable of connecting to our terminal server - no hardware costs, just Terminal Server CALs required?
    4. Keep the 300 PCs - remove the hard drives and make use of a PXE bootable "thin client" to connect to our terminal server (see the sketch below).

    If we were to choose option 4, what are the options out there? Are there any official PXE bootable thin clients for terminal server? If so, what are the licence requirements? Are there options we haven't considered? There must be lots of companies out there in this situation - curious what the current trend is for this problem.

    Edit: Option 5 - Create a bootable Windows PE image with RDP auto-start and use that as a "thin client" for our terminal server - is Windows PE licence-free in such a model?
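    For option 4, the usual building block is PXELINUX booting a small Linux image whose init auto-starts an RDP client (Thinstation is a long-standing example of exactly this pattern; RDS CALs are still needed either way). A sketch of a pxelinux.cfg/default entry, with filenames assumed:

        DEFAULT thinclient
        LABEL thinclient
            KERNEL vmlinuz
            APPEND initrd=initrd.img quiet
            # the image's startup scripts then launch rdesktop/FreeRDP
            # pointed at the terminal server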

    Read the article

  • Last (I think and hope) problems configuring SSL certificate with Apache and VirtualHosts

    - by user65567
    Finally I set up apache2 to get a single certificate for all subdomains:

        [...]
        # Go ahead and accept connections for these vhosts
        # from non-SNI clients
        SSLStrictSNIVHostCheck off

        # Apache setup which will listen for and accept SSL connections on port 443.
        Listen 443

        # Listen for virtual host requests on all IP addresses
        NameVirtualHost *:443

        # Because this virtual host is defined first, it will
        # be used as the default if the hostname is not received
        # in the SSL handshake, e.g. if the browser doesn't support
        # SNI.
        <VirtualHost *:443>
            ServerName domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/domain/public"
            <Directory "/Users/<my_user_name>/Sites/domain/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName subdomain1.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
            <Directory "/Users/<my_user_name>/Sites/subdomain1/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

        <VirtualHost *:443>
            ServerName subdomain2.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain2/public"
            <Directory "/Users/<my_user_name>/Sites/subdomain2/public">
                Order allow,deny
                Allow from all
            </Directory>
            # SSL Configuration
            SSLEngine on
            ...
        </VirtualHost>

    So, for example, I can correctly access:

        https://subdomain1.domain.localhost
        https://subdomain2.domain.localhost
        ...

    Now, anyway, I have problems accessing:

        http://subdomain1.domain.localhost
        http://subdomain2.domain.localhost
        ...

    Since I use Mac OS, on accessing the "http:" version I get a default page "Your website." (instead of an error). Why does this happen?
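    That "Your website." page is what OS X's stock Apache serves from its default document root, which suggests no port-80 VirtualHosts are defined for these names - only the :443 ones above, so plain-http requests fall through to the default. A sketch of the missing counterpart (repeat per subdomain, or redirect instead of serving):

        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName subdomain1.domain.localhost
            DocumentRoot "/Users/<my_user_name>/Sites/subdomain1/public"
            # or, to force SSL rather than serving content over http:
            # Redirect permanent / https://subdomain1.domain.localhost/
        </VirtualHost>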

    Read the article

  • Dell Poweredge 2600 RAID Transfer How-to

    - by DCookie
    Help, please!

    Hardware: Dell Poweredge 2600, PERC 4, SCSI drives - 1 standalone, 3 in a RAID 5 configuration
    OS: Windows 2000 Server

    In other words, a fairly old system. Anyway, we are in the process of taking over support for this site. The current tech wants out and is fading from view fast, so we need to solve this problem: the standalone disk (where the OS was) failed. We've replaced the disk and installed the OS, but need to know exactly how to proceed from here. I've never worked with a RAID system before, so I don't want to touch anything without knowing what I'm doing.

    We are not certain if the site will want us to attempt to recover the array or wait for the old tech to become available. We have replaced the server with a temporary box, and recovered MOST of the data from an online backup service. However, the other tech failed to back up a part of the data and the only copy of it is on this RAID array. Hence, our caution.

    We have poked minimally around in the boot-up PERC config utility, and it seems to me that that's where we'll need to be to reclaim the array. Another possibility is that there is some Dell software for the RAID controller we need to acquire. Can anyone provide clues as to how to proceed from here? Any help GREATLY appreciated.

    Read the article

  • SSL Certificate Request on Server 2003 DC CA: DNS Name not Available

    - by Beuy
    I am trying to submit a request for an SSL certificate on a Domain Controller in order to enable LDAP over SSL, and am having no end of problems. I am following the information provided at http://support.microsoft.com/default.aspx?scid=kb;en-us;321051 and http://adldap.sourceforge.net/wiki/doku.php?id=ldap_over_ssl

    Steps taken so far:

    1. Create Servername.inf with the following information:

        ;----------------- request.inf -----------------
        [Version]
        Signature="$Windows NT$"

        [NewRequest]
        Subject = "CN=servername.domain.loc" ; replace with the FQDN of the DC
        KeySpec = 1
        KeyLength = 1024
        ; Can be 1024, 2048, 4096, 8192, or 16384.
        ; Larger key sizes are more secure, but have
        ; a greater impact on performance.
        Exportable = TRUE
        MachineKeySet = TRUE
        SMIME = False
        PrivateKeyArchive = FALSE
        UserProtected = FALSE
        UseExistingKeySet = FALSE
        ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
        ProviderType = 12
        RequestType = PKCS10
        KeyUsage = 0xa0

        [EnhancedKeyUsageExtension]
        OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
        ;-----------------------------------------------

    2. Create the certificate request by running:

        certreq -new Servername.inf Servername.req

    3. Attempt to submit the certificate request to the CA by running:

        certreq -submit -attrib "CertificateTemplate: DomainController" request.req

    At which point I get the following error:

        The DNS name is unavailable and cannot be added to the Subject Alternate Name. 0x8009480f (-2146875377)

    Troubleshooting steps I have taken so far:

    1. Modified the Domain Controller template to supply the Subject Name in the request, restarted the Certificate Service, included the SAN in the request - same error.
    2. Reinstalled Certificate Services / IIS / restarted the machine countless times.

    Any help resolving the issue would be greatly appreciated.
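    One commonly cited workaround for 0x8009480f-style SAN failures is letting the CA accept SAN attributes supplied in requests (the EditFlags change from Microsoft's KB931351). Run on the CA itself, and note it loosens what requesters may put into a SAN, so treat it as a deliberate trade-off:

        certutil -setreg policy\EditFlags +EDITF_ATTRIBUTESUBJECTALTNAME2
        net stop certsvc
        net start certsvc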

    Read the article

  • Adobe Acrobat Pro 9.0 on Windows 7 print to network share gives error

    - by Archit Baweja
    I've recently upgraded a client's workstations to brand new computers with Windows 7 Professional. The server is still Windows Server 2003. The server has 2-3 file shares that get mapped to users' workstations as drives. The client has also upgraded from Acrobat 6.0 to 9.0 Pro.

    Since the upgrade, when the client tries to print to the Adobe PDF printer (aka convert something to PDF via the printer interface), the job errors out in the queue if the file is being saved on the network drive. If I instead provide a local path, the file "prints" fine. Additionally, if I change the Adobe PDF printer's settings to "don't spool, print directly to printer", it prints to the network share fine, but then it resets that setting every time.

    Things I've checked for:

    1. Permissions on the network share. The user and the computer have full access. We even gave the "Everyone" object full access.
    2. Reinstalled Adobe Acrobat Pro 9.0.
    3. Ran updates to upgrade to 9.3.4.

    Has anyone else bumped into such a problem? The support fellows from Adobe are just taking me around in circles. They don't seem to have a clue either.

    Read the article

  • System Information (msinfo32.exe) Can't Collect Information

    - by ptanne
    I have Windows XP Pro, Service Pack 1, IE 6 and 32GB of free space, 75GB total. I have had nothing but trouble after trying to install Service Pack 2, even though I used System Restore. The installation was incomplete and my computer has never been the same. I attempted to install SP2 four or five times, and SP3 once, always with the same result. I've tried reinstalling XP Pro but that didn't fix the problem. My XP Pro disk now has a scratch on it and refuses to work; Dell would not replace it, stating that my computer was out of warranty.

    I'm currently trying Reimage, which is supposed to return a computer to its original configuration and replace missing or damaged files. Believe it or not, Ripley, it stops in the middle of the operation and, so far, the Reimage techs haven't been able to figure out why.

    Among the many problems that I still have: System Information can't collect information, and the Help and Support sections that display system info also don't work. Is there some way that I can fix this? I can't afford to throw my computer away, yet.

    Thank you for listening,
    Pam Galvin
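    msinfo32 and the Help and Support system-info pages both gather their data through WMI, so a corrupt WMI repository fits these symptoms. The classic XP-era experiment (a sketch - back up first, since the repository is rebuilt from scratch) is to let the service recreate it:

        net stop winmgmt
        ren %windir%\System32\Wbem\Repository Repository.old
        net start winmgmt
        REM reboot, then retry msinfo32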

    Read the article

  • Self-hosting vs. Budget hosting - What are the economics?

    - by cdonner
    My current hosting provider (shared Linux, unlimited domains, < $10 per month, with about 20 sites) has been giving me a lot of grief lately. I am contemplating just ditching them, repurposing the old Sun V20z that is sitting in my basement rack, and moving the hosting in-house, literally. My math goes as follows:

    - My company pays up to $80 a month for my home internet service, which would cover the upgrade from the current FiOS to Comcast business internet with 5 static IPs. So this comes free.
    - Running the server will cost me about $180/year at the current rate of approx. $0.20/kWh.
    - My time is free.

    So it seems that my net cost of doing this would be about $180 annually, plus the work that goes into setup and maintenance. I will have to get email hosting somewhere, which I do not want to do myself. On the other side of the balance sheet, I'd likely get better uptime than my provider based on recent stats, will not get suspended, and don't have to spend hours with customer support.

    Overall, I am not convinced. Has anybody actually done this? What was your experience, and did it pay off?
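    As a sanity check on the power figure (my arithmetic, not the poster's):

        $180 / $0.20 per kWh = 900 kWh per year
        900 kWh / 8760 h    ~= 103 W average draw

    Roughly 100W is plausible for an idle V20z, but a loaded one can draw considerably more, so the $180/year is best treated as a floor.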

    Read the article

  • Boot ISO image from GRUB4DOS on EFI machines

    - by Vladimir Tikhomirov
    I failed at loading an ISO image (non-distro) from GRUB2 from a USB stick, but found a way to boot GRUB4DOS and then load the image from there. However, it doesn't work all the time, and the question is WHY doesn't it?

    Environment and loading process: we need an EFI machine, a USB stick, the booting ISO, GRUB2 and GRUB4DOS - the last three on the USB stick. The boot chain is:

        USB -> EFI loader -> GRUB2 -> GRUB4DOS -> ISO image

    Configuration files: to boot GRUB4DOS I use this from grub.cfg:

        menuentry "image.iso" {
            linux /syslinux/grub.exe --config-file="/menu.lst"
        }

    My menu.lst is here:

        timeout 20
        default 0

        title image.iso
        find --set-root --ignore-floppies --ignore-cd //image.iso
        map --heads=0 --sectors-per-track=0 //image.iso (hd32)
        map --hook
        chainloader (hd32)

    This works perfectly on legacy (BIOS) machines. However, on the EFI machines, when I get to GRUB4DOS I don't see the menu with image.iso; I see only the GRUB command line. That means my menu.lst didn't load. Why is that?

    Background and ideas: I suspect GRUB4DOS doesn't recognize my USB stick as a device. I tried the command find and got (hd0,0), (hd0,1), (hd0,2), (rd). When I tried to set root to any of these devices, I didn't see the FAT filesystem the way I did on legacy machines. The root device is (hd0,0), which has an NTFS filesystem - that should be the Windows partition. EFI machines support only GRUB2, so I can't boot GRUB4DOS straight away.

    Please don't suggest anything like the following, because my image doesn't have a kernel. (Imagine loading HDAT2 or Hiren's Boot CD, for example.)

        menuentry "Blancco Blancco5.iso" {
            set isofile="/image.iso"
            loopback loop $isofile
            set root=(loop)
            linux /isolinux/vmlinuz isofile=$isofile splash quiet
            initrd /isolinux/initrd
        }
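    When grub.exe drops to a prompt instead of a menu, it usually means --config-file didn't resolve - consistent with the device-mapping suspicion above. Two things worth trying from that command line (the device name is illustrative; use whatever your find output shows actually holds the file):

        find /menu.lst                  # which device does GRUB4DOS think has it?
        configfile (hd0,1)/menu.lst     # load it explicitly from that device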

    Read the article

  • Unable to connect to APNS with java-apns

    - by Mac
    I've got a Java program running on a firewalled server that is intended to send push notifications to my iPhone app by using java-apns. Problem is, whenever I try to send a notification the library fails to connect to the APNS server. From the stack trace, it seems that when creating the required SSL connection, the connection is being refused at some point (a java.net.ConnectException with a detail message of "connection refused" is being thrown when the library calls SSLSocketFactory's createSocket method). It would not surprise me at all if the firewall is blocking the connection, but unfortunately as I do not manage the server I am unable to verify that that is indeed the case. The fact that the program works fine from my (non-firewalled) desktop seems to support the theory. My question is, does anyone know of any method by which I can find the root cause of the problem, and/or can anyone tell me what I should tell the server admin to change to get things to work (if it is indeed the firewall that's the problem)? For reference, the server is a Linux box and I'm using version 0.1.2 of java-apns.
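    One way to separate a firewall block from a library problem is to test the raw TLS connection from the firewalled server itself, against Apple's legacy binary-protocol gateway that java-apns-era code uses:

        # Production gateway; use gateway.sandbox.push.apple.com for sandbox builds
        openssl s_client -connect gateway.push.apple.com:2195

    An immediate "connection refused" here, with the same command succeeding from the desktop, pins it on the network path - in which case the admin needs to allow outbound TCP 2195 (and 2196 for the feedback service).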

    Read the article

  • Web server with static IP from cable provider

    - by Dmitri
    I have a subscription to 5 static IP addresses. I want to run a web server from behind a router. My network config is as follows: the server's local address is 10.1.10.2 and it has IIS running on it on ports 80 and 443 (IIS is not my fault, it had to be done). The server's IP address is static, the subnet mask is 255.255.255.0, and the gateway is 10.1.10.1, which is the local address of the cable modem / router / gateway thingy.

    All looks to be in textbook order as far as the LAN goes. I can get to anything on my LAN from any computer on my LAN, whether they have static IPs or get them through DHCP from the cable modem/router thingy. However, I have no internet access from any of my LAN computers. I called Comcast tech support and they say they can connect to my modem just fine, and can actually use it to ping any computer on the internet or any computer on my LAN from the router/modem (I checked myself; this is in fact the case). Still, nothing on my LAN has internet connectivity. I tried pinging the DNS servers - nothing. I tried directly typing in web sites' IP addresses - nothing, so it doesn't seem to be a DNS issue.

    Any ideas? What malfunction of a router could be causing such weird behavior? Any ideas or educated guesses are very much appreciated.

    Read the article

  • ADSL throughput loss from Reed-Solomon encoding

    - by javano
    I'm reading about ADSL, starting here, and I am confused by how the Reed-Solomon encoding for ECC limits the available transfer rate as much as it does (nearly half). This pdf on the same subject contains the following:

        A maximum of 255 sub-carriers can be used to modulate data in the
        downstream direction. Sub-carrier 256, the downstream Nyquist
        frequency, and sub-carrier 64, the downstream pilot frequency, are
        not available for user data, thus limiting the total number of
        available downstream sub-carriers to 254. Each of these 254
        sub-carriers can support the modulation of 0 to 15 bits. Since the
        ADSL DMT data frame rate is 4000 frames per second, the maximum
        theoretical downstream data rate of an ADSL system is 15.24Mbps.
        Due to limitations in system architecture, specifically the maximum
        allowable Reed-Solomon codeword size (255 bytes), the maximum
        achievable downstream data rate is 8.16Mbps.

    How does this nearly halve the throughput? Is all that extra bandwidth overhead of the RS encoding? 15240000 bps (15.24Mbps) - 8160000 bps (8.16Mbps) = 7080000 bps (7.08Mbps). Where has that 7Mbps of throughput gone?

    EDIT: I tried to read the wiki page on Reed-Solomon, but it's all crazy maths and algebra, which I don't understand. I can understand that data is split into 255-byte codewords, because that may be the maximum codeword size that still maintains accuracy during transmission; but I don't understand why that means less data is sent.
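    Reading the quoted passage closely, the missing 7Mbps isn't parity overhead - it's a framing ceiling. Assuming (as the passage implies) roughly one RS codeword per DMT frame, the two limits fall straight out of the arithmetic:

        254 sub-carriers x 15 bits x 4000 frames/s  = 15,240,000 bps  (modulation ceiling)
        255 bytes x 8 bits x 4000 frames/s          =  8,160,000 bps  (codeword ceiling)

    A 255-byte codeword is only 2040 bits, so spread across 254 carriers each carrier averages about 8 bits per frame and can never reach its 15-bit maximum. The RS parity bytes then come out of that 255, trimming the usable payload a little further still.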

    Read the article

  • Mod_Proxy_AJP set up issues

    - by TripWired
    I'm trying to set up Tomcat behind Apache using mod_proxy_ajp. After tons of messing around with the configs, I am stuck at a 403 page when trying to access Tomcat. I had a 404 before, but apparently something I changed along the way fixed that. I'm not sure which setting to change at this point. Could anyone look over the configs I have and see if anything is missing?

    httpd.conf:

        <IfModule mod_proxy.c>
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from localhost
            </Proxy>
        </IfModule>

    proxy_ajp.conf:

        LoadModule proxy_ajp_module modules/mod_proxy_ajp.so
        #
        # When loaded, the mod_proxy_ajp module adds support for
        # proxying to an AJP/1.3 backend server (such as Tomcat).
        # To proxy to an AJP backend, use the "ajp://" URI scheme;
        # Tomcat is configured to listen on port 8009 for AJP requests
        # by default.
        #
        #
        # Uncomment the following lines to serve the ROOT webapp
        # under the /tomcat/ location, and the jsp-examples webapp
        # under the /examples/ location.
        #
        ProxyPass /tomcat ajp://127.0.0.1:8009/
        ProxyPassReverse /tomcat ajp://127.0.0.1:8009/
        ProxyPass /examples/ ajp://localhost:8009/jsp-examples/
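    One likely source of the 403 is the <Proxy *> block itself: "Allow from localhost" only passes requests that originate on the server, so a browser on any other machine gets denied by the proxy ACL. A hedged tweak - widen it to your LAN rather than the world (the subnet below is a placeholder):

        <IfModule mod_proxy.c>
            ProxyRequests Off
            <Proxy *>
                Order deny,allow
                Deny from all
                Allow from localhost 192.168.0.0/16   # adjust to your network
            </Proxy>
        </IfModule>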

    Read the article

  • Using dnsmasq for accessing multiple nameservers assigned by DHCP

    - by Ash
    At my work desktop running openSUSE 11.4, I have a local network which gets its address, domain (work.site) and nameservers (10.100.1.1, 10.100.1.2) through DHCP - these get written into /etc/resolv.conf. I access the internet via the work network, and those 2 nameservers end up answering any public domain name lookups.

    I also have a private VPN that I connect to. The nameserver (10.111.1.1) and domain (private.site) rarely change for this network, but currently they're pushed by the openVPN client into NetworkManager and get merged with the existing /etc/resolv.conf. My resolv.conf ultimately ends up looking like this:

        search private.site work.site
        nameserver 127.0.0.1
        nameserver 10.111.1.1
        nameserver 10.100.1.1

    As you can see, the 2nd nameserver from my work network was pushed out because of the maximum of 3 entries. That's fine for now, but would be a problem if the remaining work nameserver goes down for maintenance or something.

    So I found out that dnsmasq could help me here, and I set up dnsmasq just as a local DNS resolver without any DHCP support. This is my /etc/dnsmasq.conf right now:

        resolv-file=/etc/resolv.conf
        server=/private.site/10.111.1.1
        server=/1.111.10.in-addr.arpa/10.111.1.1
        listen-address=127.0.0.1
        bind-interfaces
        log-queries

    I've made dnsmasq get the list of nameservers from /etc/resolv.conf, since NetworkManager seems to be updating this list correctly (for a maximum of 3 nameservers). I'm able to resolve the host names in both networks correctly. So these are the questions I have:

    1. Is there a way I can make either NetworkManager or dhclient write out the list of nameservers somewhere else, which I can then point dnsmasq at as its resolv-file?
    2. How do I make dnsmasq use certain nameservers as the default for all queries? Right now I notice that lookups for public domains on the internet are usually sent to both nameservers - the one on work.site as well as private.site. It would be good if I could limit this to work.site only.
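    For the second question, dnsmasq can stop reading resolv.conf entirely and take its defaults by hand, which also sidesteps the 3-entry limit. A sketch, with the addresses taken from the question:

        # /etc/dnsmasq.conf
        no-resolv                               # ignore /etc/resolv.conf (drop resolv-file=)
        server=10.100.1.1                       # default for all queries...
        server=10.100.1.2                       # ...with the second work NS as backup
        server=/private.site/10.111.1.1         # except the VPN domain
        server=/1.111.10.in-addr.arpa/10.111.1.1
        strict-order                            # query servers in the order listed
        listen-address=127.0.0.1
        bind-interfaces

    With no-resolv, public lookups only ever go to the work.site servers, and private.site stays pinned to the VPN nameserver.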

    Read the article

  • Large virtual memory size of ElasticSearch JVM

    - by wfaulk
    I am running a JVM to support ElasticSearch. I am still working on sizing and tuning, so I left the JVM's max heap size at ElasticSearch's default of 1GB. After putting data in the database, I find that the JVM's process is showing 50GB in SIZE in top output. It appears that this is actually causing performance problems on the system; other processes are having trouble allocating memory. In asking the ElasticSearch community, they suggested that it's "just" filesystem caching. In my experience, filesystem caching doesn't show up as memory used by a particular process. Of course, they may have been talking about something other than the OS's filesystem cache, maybe something that the JVM or ElasticSearch itself is doing on top of the OS. But they also said that it would be released if needed, and that didn't seem to be happening. So can anyone help me figure out how to tune the JVM, or maybe ElasticSearch itself, to not use so much RAM. System is Solaris 10 x86 with 72GB RAM. JVM is "Java(TM) SE Runtime Environment (build 1.7.0_45-b18)".
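    If the 50GB is address space rather than resident memory, memory-mapped index files are a plausible suspect: ElasticSearch can mmap its Lucene indexes, which inflates a process's SIZE without touching the heap. A hedged experiment for elasticsearch.yml - the setting name below is the one I believe the 0.90/1.x-era releases used, so verify against your version's docs:

        # elasticsearch.yml - use plain NIO file access instead of mmap
        index.store.type: niofs

    Comparing SIZE against RSS (e.g. via pmap or prstat on Solaris) before changing anything would confirm whether the 50GB is mappings at all.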

    Read the article

  • kvm works only when kvm-intel is unloaded

    - by Sathya
    I am new to KVM, and I have this strange issue. Before explaining it, here is my setup: I'm trying to install a VM on my host, an Acer 5720 laptop with an Intel T7500 processor. The CPU flags indicate that virtualization is supported. I run Ubuntu 10.04 (Lucid) on it, which ships with KVM.

    Now, the issue. I don't get any errors while executing "sudo modprobe kvm-intel", so I presume my processor does indeed support hardware virtualization. I use virt-manager and create a VM on which I install Ubuntu from an *.iso file. When I start the VM, it says it is running - no signs of any trouble, and I can see the domain in "virsh list". But when I try to connect to the VM through VNC, all I get is a blank screen (no cursor), and there is no response to any key press. I changed the video mode etc. and tried all the different combinations, but none work.

    But strangely, if I shut down the VM and virt-manager and then unload the module by doing "sudo modprobe -r kvm-intel", everything works fine: I can see the screen via VNC, and I am able to install the OS and so on. So what does this mean? Is hardware virtualization not supported? How come there is no error anywhere? "dmesg | grep kvm" doesn't report anything. Can someone throw light on what exactly is happening?
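    A few generic checks that help tell "VT-x absent or BIOS-disabled" apart from a KVM/display bug (nothing here is specific to this Acer):

        egrep -c '(vmx|svm)' /proc/cpuinfo       # >0 means the CPU advertises VT-x/AMD-V
        dmesg | egrep -i 'kvm|vmx|disabled'      # "disabled by bios" shows up here
        ls -l /dev/kvm                           # present only when a kvm module is loaded

    If dmesg reports VT-x disabled by the BIOS, kvm-intel loads but guests fall back to broken behavior, which would match the symptoms; enabling virtualization in the BIOS (with a full power-off afterwards) is the usual fix.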

    Read the article
