Search Results

Search found 10951 results on 439 pages for 'high definition'.

Page 234 of 439

  • Running a small IPTV station

    - by nixterrimus
    I'm looking to run an IPTV station for my dorm. I know I can serve multicast, so that's not a problem. The station will serve podcasts and other CC-licensed content. The target endpoint is XBMC, a media center. So far I know that I need to serve an RTP stream over UDP carrying MPEG-4 AVC (Main or High profile) video with AAC (or AC3?) audio. I've had some luck streaming with VLC and VLM, but it seems limited. What are my other options? Everything has to run on Linux, hopefully open source. How can I use playlists rather than live streams? What are my software options?
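
    A minimal sketch of playlist-driven multicasting with plain command-line VLC, as a baseline for comparing other tools; the multicast group, port and bitrates are placeholder assumptions, and the transcode step only matters if the source files are not already AVC/AAC:

        # loop a playlist, transcode to H.264/AAC, and multicast it as an MPEG-TS over RTP
        cvlc --loop playlist.m3u \
          --sout '#transcode{vcodec=h264,vb=2000,acodec=mp4a,ab=128}:rtp{dst=239.255.12.42,port=5004,mux=ts}'
        # clients such as XBMC can then open rtp://@239.255.12.42:5004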

    Read the article

  • Connecting to VPN via Proxy

    - by Rodrigo
    Hi, my company's VPN server is located in the Netherlands, and my current location is a really crappy place in terms of connectivity to it: the connection keeps dropping, it's slow, and it keeps being reset during high-traffic times. I have a dedicated server in the USA which is able to connect to the VPN server without these issues; its connection is stable and fast. My question is: how do I connect to this VPN through a proxy running on my dedicated server? I'm on Windows 7, using an XP VM to connect to the VPN. Thanks.
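
    In case it is useful: if the VPN is TCP-based (for example OpenVPN in TCP mode), the lowest-effort "proxy" is an SSH port forward through the US box; PPTP and L2TP will not survive this kind of forwarding because they rely on GRE/UDP. A sketch with hypothetical hostnames and port:

        # run on the XP VM (e.g. via an ssh client such as PuTTY's command line)
        ssh -N -L 1194:vpn.example.nl:1194 user@us-dedicated.example.com
        # then point the VPN client at localhost:1194 instead of the Dutch server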

    Read the article

  • Website content via remote MySQL or API for clients

    - by Kannan
    I would like to know whether there will be problems for shared hosting/VPS/dedicated users if they request content via remote MySQL or via an HTTP API. What are the issues if they make every request to the production server for delivery of the website articles along with image URLs etc.? Will it cause disk wait for shared/VPS/dedicated hosting users? Will remote MySQL, an HTTP API or curl cause high CPU usage, I/O wait or memory usage? Which is best for speed: remote MySQL or an HTTP web API? Or are there alternatives? Thank you, Kannan

    Read the article

  • How to best configure synflood settings in CSF while keeping web response fast

    - by Binh Nguyen
    My server goes down randomly 4-5 times a day because the load spikes very quickly. I have installed CSF, and with some configuration the server is now stable, with load around 5. BUT the big issue is that real users find it very hard to access the website, especially from the IE browser (you can test at xaluan.com). The following is the config used in CSF:

        SYNFLOOD = "1"
        SYNFLOOD_RATE = "100/s"
        SYNFLOOD_BURST = "10"
        CONNLIMIT = "80;30"
        PORTFLOOD = "80;tcp;70;5"
        CT_LIMIT = "29"
        # other config mostly left at defaults

    I have been playing around with this config for a week but still have no workaround. If I increase the rate to SYNFLOOD_RATE = "140/s" or more, the website responds very fast, but the side effect is that the server load increases very quickly: normally 20, and up to a few hundred at peak time. What I need is fast response time while the load stays low. Please help, thanks. PS: the server runs nginx as a frontend, with Apache, MySQL and PHP; the home page has around 70 elements, which are cached in the browser after the first visit.
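
    Since nginx already fronts the stack, one option is to shed abusive clients at the HTTP layer instead of tightening SYN limits until real browsers suffer (IE opens several parallel connections, which aggressive SYNFLOOD_RATE settings can easily punish). A minimal nginx sketch; the zone size and rates are assumptions to tune:

        http {
            limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;
            server {
                location / {
                    limit_req zone=perip burst=40 nodelay;
                    # proxy_pass to the Apache backend here
                }
            }
        }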

    Read the article

  • Online backup solution

    - by Petah
    I am looking for a backup solution for all my data (about 3-4 TB). I have looked at many services out there, such as: http://www.backblaze.com/ http://www.crashplan.com/ Those services look very good and reasonably priced, but I am worried about them because of incidents like this: http://jeffreydonenfeld.com/blog/2011/12/crashplan-online-backup-lost-my-entire-backup-archive/ I am wondering if there is any online backup solution that offers a service level agreement (SLA) with compensation for data loss at a reasonable price (under $30 per month). Or is there a good solution that offers a high enough level of redundancy to mitigate the risk? Required:
    - Offsite backup to prevent data loss from fire/theft.
    - Redundancy to protect the backup from corruption.
    - A reasonable cost (< $30 per month).
    - An SLA in case the service provider defaults on its agreements.

    Read the article

  • What is the best/easiest way to use scripts to analyze network traffic?

    - by yungin
    I'm looking to analyze packets via scripts, using something high-level, in a Mac/Linux environment. I'm currently looking at different Python+libpcap libraries, perhaps Lua+Wireshark too, and maybe tcpdump+bash (but I'm not sure that gives me much information I can use). I've also heard good things about scapy, but I'm not sure. Do you have any recommendations? There are quite a few options out there; what have you found works best? I definitely want something scriptable, not something I need to compile (like C/C++, etc.).
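
    As a point of comparison for the tcpdump+bash route, tshark (Wireshark's terminal front end) already does scriptable field extraction with no compiled code; a small sketch, assuming a capture file named capture.pcap:

        # print HTTP requests as source IP / host / URI, tab-separated
        # (older tshark versions use -R instead of -Y for the display filter)
        tshark -r capture.pcap -Y 'http.request' \
               -T fields -e ip.src -e http.host -e http.request.uri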

    Read the article

  • MySQL 5.5 on Windows server is horribly slow

    - by Brad
    I have had no luck getting MySQL 5.5 to be as fast as 5.1 or MariaDB on the exact same hardware/database/environment under Windows Server 2003 R2 or 2008 R2. My benchmarks from our application:
    - MySQL 5.5 + CentOS 5.2 (XenServer virtual) = 28 seconds (box is "busy", not buried)
    - MariaDB (5.1) + Windows 2003 (physical box) = 130 seconds (box is 2% busy)
    - MySQL 5.1 + Windows 2003 (physical box) = 170 seconds (box is 2% busy)
    - MySQL 5.5 + Windows 2003 (physical box) = 305 seconds, as high as 600 seconds (box is 2% busy)
    The only differences between these runs are the removal of skip-locking and running mysql_upgrade.exe to update some tables for stored procs on 5.5. Yes, I know it's a release candidate; I'm feeding that back to MySQL as well. No slow queries are logged; it doesn't think it's being slow, it just is. I'm going to start tearing into the queries themselves to see if the INSERT/SELECT plans have gone buggo on 5.5. Any help would be appreciated! Thanks
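
    One quick way to rule out a changed default between the two versions is to diff the configuration each build actually runs with; a sketch, assuming both instances are reachable from one shell (hostnames are placeholders):

        mysql -h host51 -e "SHOW GLOBAL VARIABLES" > vars-5.1.txt
        mysql -h host55 -e "SHOW GLOBAL VARIABLES" > vars-5.5.txt
        diff vars-5.1.txt vars-5.5.txt    # watch the innodb_*, cache and thread settings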

    Read the article

  • Solved: puppet master REST API returns 403 when running under Passenger, works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided with the Puppet install for the puppet master, which runs through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate': Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the Puppet site, it is better to regulate access in auth.conf for reaching the file server, and then let the file server serve everything):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; ports 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Update: I added verbose logging to the puppet master and restarted Nginx; here's the additional info I see in the logs:

        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

        [amisr1@blramisr195602 ~]$ sudo facter fqdn
        blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add dns_alt_names = 10.209.47.31, cleaned all certificates on master and agent, regenerated the certificates, and signed them on the master using the option --allow-dns-alt-names:

        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
        Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
        Signed certificate request for blramisr195602.XXXXXX.com
        Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. I'm not sure why the logs show access rules being compared by IP and not hostname. Is there any Nginx configuration to change this behavior?
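
    For reference, the "Could not resolve 10.209.47.31: no name" line suggests the master falls back to comparing by IP whenever reverse DNS for the agent fails, so the hostname-based allow rules never match and the default deny under path / applies; dns_alt_names only changes the certificate, not this lookup. Assuming Puppet 3.0's auth.conf syntax, either a PTR record for the agent or an explicit IP match should work; a sketch using the allow_ip directive added in 3.0, with the subnet as a placeholder:

        path ~ ^/catalog/([^/]+)$
        method find
        allow $1
        allow_ip 10.209.47.0/24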

    Read the article

  • How to get better video quality in Lync?

    - by sinned
    I want to use a Logitech ConferenceCam to stream talks live via Lync. When I view the raw webcam image via VLC, the quality is very good (but the latency is high because of buffering). However, when I stream it using Lync, the video gets blurry. Is there a way to ensure QoS in Lync or otherwise improve the video quality to (near-)native? I would rather have some dropped frames than a lower resolution where I can't read the slides. In my setup I use Lync with an Office 365 E3 contract, so I have no Lync server in my network. I thought about replacing Lync completely with VLC, but I first want to try Lync, because VLC will probably cause firewall issues. Also, I haven't looked up the VLC parameters for less buffering, faster encoding, a slightly lower resolution (natively it's more than HD) and streaming.

    Read the article

  • Virtual Machine Manager 2012 CPU Average

    - by Grant
    What exactly is the CPU Average field in VMM 2012 showing me? I'm running Server 2008 R2 with VMM 2012. My server has 2x16-core CPUs installed. An example virtual machine has 4 virtual processors and shows 20% CPU usage. Is that:
    - 20% of the entire system's available CPU power?
    - 20% of 4 of the 32 cores' CPU power?
    - 20% of one core's CPU? (in which case it could go as high as 400%)
    - Something else entirely?
    How can I tell how much of the entire system's CPU power is being used (all 32 cores)? Edit: Well, I can tell for sure it's not 20% of the entire system's CPU power, since the entire server's CPU averages add up to well over 100% right now.

    Read the article

  • MacBook can't use internet, but nslookup and ping both work

    - by Joel Coehoorn
    I have a user with a new high-end MacBook Pro that can't use the internet. He can connect to either our wired or wireless network and do things like browse file shares, but can get no further. When I brought the machine in for testing, I found that I could do an nslookup just fine, and I'm able to ping addresses returned by nslookup just fine. I'm even able to bring up web pages by entering the IP address into the address bar directly. However, when I try to ping the domain name rather than the IP address, it just sits there. So apparently I can either do name resolution or communicate with an address, but not both at the same time. Again, these symptoms occur on both the wired and wireless network. Other machines on our network, including a few other Macs, don't have this issue. Any ideas?
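
    One wrinkle worth knowing when debugging this on a Mac: nslookup queries the DNS server directly, while ping and browsers resolve through the system resolver, so the two can disagree. A small diagnostic sketch using OS X's standard tools (example.com stands in for any site):

        scutil --dns                              # show the resolver configuration the OS actually uses
        dscacheutil -q host -a name example.com   # resolve through the system resolver, as ping does
        nslookup example.com                      # query the DNS server directly, bypassing the resolver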

    Read the article

  • Using airport extreme base station as a wireless G to wireless N bridge

    - by micmcg
    Due to unfortunate placement of various wall outlets in my new apartment, my ADSL2 modem/router cannot be placed next to my AirPort Extreme base station (AEBS). I'd prefer not to run Cat5 cable between them, but I still want to use wireless N on the AEBS, because it will have a high-speed NAS plugged in via gigabit Ethernet, and I want to maximise the wireless transfer speed to my 27" iMac. Is it possible to use one of the radios in the AirPort Extreme to bridge to the router over the wireless G network, while the wireless N radio broadcasts normally? The only traffic going over the G network should be internet (24 Mbps ADSL2), so it should never get saturated. Thanks

    Read the article

  • Memcached server: Is it a good practice to point two server urls to the same server?

    - by Niro
    I have a system with connections to a memcache server from several different files and servers. I would like to stay with one server but keep the option of increasing the number of memcache servers (for periods of high traffic). My idea is to tell memcache there are two servers, while the two URLs will point (by DNS) to a single server. In the future, if I want, I can add a server and change DNS without changing the code in many places. Is this a good practice? Is there a performance cost to the fact that there are two server connections but they both point to the same server? Any other idea how to achieve instant expandability of memcache capacity without needing to change the code and redeploy?

    Read the article

  • How to customize Windows failover clustering to trigger on failure of a custom Windows service?

    - by melaos
    I'm a total newbie at Windows failover clustering, and what I want to do now is set up failover clustering (FC) on two Windows 2008 R2 servers. Right now I have my custom Windows service running on both machines, but they cannot run concurrently, as that would mess up the DB; I just want one to be available at all times (high availability). So I'm wondering if there's any way to set the failover policy to include this custom Windows service I've installed on these machines, so that if the service goes down or dies, it will automatically trigger a failover to the second node. Is this possible? Or must it be done programmatically? And if so, what is the best way? Thanks, ~m
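
    Failover clustering has a built-in answer for this case: the "Generic Service" resource type, which monitors an arbitrary Windows service and fails the role over to the other node when the service dies. A sketch using the FailoverClusters PowerShell module that ships with 2008 R2; the service and role names are placeholders:

        Import-Module FailoverClusters
        # wrap the existing service in a clustered role that owns and monitors it
        Add-ClusterGenericServiceRole -ServiceName "MyCustomService" -Name "MyCustomServiceRole"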

    Read the article

  • Postfix performance

    - by Brian G
    Running Postfix on Ubuntu, sending a lot of mail (~1 million messages per day). Load averages are extremely high, but not much of it shows up as CPU or memory load. Is anyone in a similar situation who knows how to remove the bottleneck? All mail on this server is outbound. I would have to assume the bottleneck is disk. Just an update, here is what iostat looks like:

        avg-cpu:  %user  %nice  %system  %iowait  %steal  %idle
                   0.00   0.00     0.12    99.88    0.00   0.00

        Device:  rrqm/s  wrqm/s    r/s    w/s  rsec/s   wsec/s  avgrq-sz  avgqu-sz   await  svctm   %util
        sda        0.00   12.38   0.00   2.48    0.00   118.81     48.00      0.00    0.00   0.00    0.00
        sdb        1.49   22.28  72.28  42.57  629.70  1041.58     14.55    135.56  834.31   8.71  100.00

    Are these numbers in line with the performance you would expect from a single disk? sdb is dedicated to Postfix. I think it is queue shuffling, from incoming to active to deferred. More details from the questions: the server is a quad-core Xeon E5405 @ 2.00GHz with 4 GB RAM. Load average: 464.88, 489.11, 483.91 on 4 cores, but memory utilization and CPU use are minimal. Postfix instances: between 16 and 32.

    Read the article

  • PHP CGI locks up and times out

    - by Oli
    I've got a dozen WordPress sites hosted on an nginx/php-cgi setup. After a variable amount of time (usually not that long, and occasionally very fast), PHP locks up, and after 2 minutes (the timeout I set in nginx) I get a 504 timeout. I've tried everything I can think of. I've been using an init script to launch php-cgi, but I compiled php-fpm and tried that for a day with various configurations, with the same results. I've tried a low number for PHP_FCGI_CHILDREN. I've tried as high as my RAM will let me. I've tried various settings for PHP_FCGI_MAX_REQUESTS. xcache seemed to exacerbate the issue, so I removed it. The server is a VPS, but it has over a gig of RAM dedicated to it. All suggestions are welcome at this juncture, because I'm desperate.
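
    For what it's worth, a php-fpm pool with an explicit request timeout at least turns a silent lock-up into a logged, killed request with a stack trace. A minimal sketch of the relevant pool settings; the values are assumptions to tune for roughly a gigabyte of RAM:

        [www]
        pm = dynamic
        pm.max_children = 8               ; cap total PHP workers to fit in RAM
        pm.start_servers = 2
        pm.min_spare_servers = 1
        pm.max_spare_servers = 4
        pm.max_requests = 500             ; recycle workers to contain memory leaks
        request_terminate_timeout = 30s   ; kill any request stuck this long
        request_slowlog_timeout = 10s     ; log a trace for slow requests
        slowlog = /var/log/php-fpm-slow.log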

    Read the article

  • Server 2008 R2: How to change the Windows 7 Basic theme color

    - by Wes Sayeed
    We're deploying thin clients connecting to a terminal server farm. The computers are highly visible to the public, and I would like them to at least look presentable and not like something out of 1995. So I installed the Desktop Experience feature and enabled the Themes service. The server will not support Aero because it has no 3D graphics, but we can enable the Windows 7 Basic theme, which has the Aero look without the 3D effects. The problem with that theme is that you can select any window color you want, as long as it's baby boy blue. Is there a way to make those windows another color? The window color controls do nothing.

    Read the article

  • Temporarily Utilizing 304 Header on Apache for Crawlers

    - by Volomike
    I have a client with a hosting arrangement of 400 customer sites, all hosted through suPHP in CGI mode on Apache. The sysop is now gone, and the client is calling on me to roll out a new PHP thing. Trouble is, server load is very high right now, and we have found that it's due to crawlers. We had one customer in particular who complained of slow websites, so we enabled a 304-header plugin on his site against most crawlers, and his site perked right up. We'd like to lower the load by issuing a global 304 header to all the crawlers while letting human visitors through. I have a long list of user-agent keywords to trap for. What's the best way to temporarily engage that global 304 header while allowing human visitors to get right on through? I mean, I could roll out 400 .htaccess file changes, but it would be ideal to make this change in one central Apache config and have it automatically affect all the sites at once.
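
    One central way to do this is mod_rewrite in the global httpd.conf (or a conf.d include), since the R flag accepts status codes in the 300-399 range. This is a sketch with an illustrative, not exhaustive, user-agent pattern; it assumes the R=304 trick behaves on your Apache build, so verify on one vhost first, and note that virtual hosts only pick up global rewrite rules if each vhost sets RewriteOptions Inherit:

        # send known crawlers an empty 304 for every request
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} (googlebot|bingbot|slurp|baiduspider|crawl|spider) [NC]
        RewriteRule .* - [R=304,L]

    Since an unconditional 304 tells crawlers nothing has changed, search indexes will go stale while it is engaged, which is acceptable for a temporary load-shedding measure like this.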

    Read the article

  • Mysqldump causes "Too many connections"

    - by vbachev
    A scheduled backup using mysqldump on one of our databases is causing "Too many connections". The database contains both InnoDB and MyISAM tables, with a size of around 500 MB. The "Too many connections" condition lasts for about 2-3 minutes. We understand that mysqldump locks the tables, causing all other queries and connections to pile up and jam the MySQL server. We need frequent backups, and we cannot afford server downtime or putting websites in maintenance mode while doing it. Our websites are global and traffic is high all the time, so it's hard to find a quiet moment for backups. How can we avoid downtime during backups? Is there maybe a way to use mysqldump so that it will not lock all tables at the same time? Is there an alternative to backing up with mysqldump?
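
    For the InnoDB side, mysqldump can take a consistent snapshot without holding table locks; a sketch (the MyISAM tables are the catch, since they still need locks for a consistent copy, so this assumes they are either small, rarely written, or candidates for conversion to InnoDB):

        # consistent, non-blocking dump for InnoDB; --quick streams rows instead of buffering them
        mysqldump --single-transaction --quick mydb > mydb.sql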

    Read the article

  • Linux: Why change the inode size?

    - by FractalizeR
    Hello. tune2fs allows changing the inode size from the default 128 bytes to almost anything (as long as it is a power of two). What can be the reasons for changing the default inode size? The Red Hat article at http://kbase.redhat.com/faq/docs/DOC-7433 says this can be done to store ACL attributes inside inodes. What else can be stored inside an inode? Maybe some other attributes? Or anything else? Is there any reason to increase the inode size on modern high-capacity drives (2 TB and more)?
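
    For context, inode size is normally fixed at filesystem creation time; a sketch of setting it with mke2fs (the device name is a placeholder). The space beyond the fixed fields stores in-inode extended attributes, such as ACLs and SELinux labels, which avoids an extra block read per file:

        # create an ext3 filesystem with 256-byte inodes instead of the 128-byte default
        mke2fs -j -I 256 /dev/sdb1
        tune2fs -l /dev/sdb1 | grep 'Inode size'   # verify afterwards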

    Read the article

  • DSL connection dropping when starting game

    - by bootoo
    New connection set up last week; I have called tech support, the signal is fine, and they are sending out a tech guy tomorrow. However, I can't make sense of this. Using Vista, a 2Wire modem and a 6 Mb ATT connection over wireless, I can browse websites, stream video from my PC wirelessly to my Xbox, and watch Hulu in high res; the connection seems fine. But when I load up World of Warcraft and select Enter World, without fail my 2Wire modem's DSL light goes red, the internet light goes out, and I'm no longer connected. Then after 10 or 15 seconds it works again, but the game has kicked me. I noticed the same thing on opening or closing uTorrent as well. No other phones/devices are plugged into the phone jacks in the apartment, so why would joining a game cause my DSL to drop? I can only guess it throws a bunch of information at the server when I hit Enter World and that causes an issue that shuts my connection down, or my router hates me. All help appreciated.

    Read the article

  • 64bit or 32bit Linux system?

    - by Milan Babuškov
    I have a server that has 4 GB of RAM. On it, I have an installation of 32-bit Slackware Linux 12.1, which of course is not using all 4 GB of RAM. I'd like to increase the RAM to 8 GB soon, and am looking for a way for the system to use it. The system is used as a database server and is under high load during the day. AFAICT, I have two options: stay with 32-bit, rebuild the kernel (with PAE) and lose some performance; or go with 64-bit and reinstall everything. Looking at 64-bit versions of Slackware, I could run -current or Slamd64. Now, on to the questions: Should I stay with 32-bit or go with 64-bit? If I go 64-bit, should I use -current or Slamd64? P.S. I hope to get answers from someone actually using any of these configurations in production, not just a copy/paste of something I could find myself via Google.

    Read the article

  • Configuring jdbc-pool (tomcat 7)

    - by john
    I'm having some problems with Tomcat 7 when configuring jdbc-pool. I've tried to follow this example: http://www.tomcatexpert.com/blog/2010/04/01/configuring-jdbc-pool-high-concurrency so I have in conf/server.xml:

        <GlobalNamingResources>
          <Resource type="javax.sql.DataSource"
                    name="jdbc/DB"
                    factory="org.apache.tomcat.jdbc.pool.DataSourceFactory"
                    driverClassName="com.mysql.jdbc.Driver"
                    url="jdbc:mysql://localhost:3306/mydb"
                    username="user"
                    password="password"/>
        </GlobalNamingResources>

    and in conf/context.xml:

        <!-- the element must be closed with </Context>; an unclosed <Context> is
             malformed XML and makes Tomcat skip the ResourceLink entirely -->
        <Context>
          <ResourceLink type="javax.sql.DataSource"
                        name="jdbc/LocalDB"
                        global="jdbc/DB"/>
        </Context>

    and when I try to do this:

        // the standard JNDI prefix in Tomcat is "java:comp/env" (no slash after the colon)
        Context initContext = new InitialContext();
        Context envContext = (Context) initContext.lookup("java:comp/env");
        DataSource datasource = (DataSource) envContext.lookup("jdbc/LocalDB");
        Connection con = datasource.getConnection();

    I keep getting this error:

        javax.naming.NameNotFoundException: Name jdbc is not bound in this Context
            at org.apache.naming.NamingContext.lookup(NamingContext.java:803)
            at org.apache.naming.NamingContext.lookup(NamingContext.java:159)

    Please help. Thanks.
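
    If the lookup still fails once context.xml is well-formed, declaring the resource reference in the web application's WEB-INF/web.xml is the belt-and-braces step; Tomcat can usually do without it, but the servlet spec requires it for portable JNDI lookups. A sketch:

        <resource-ref>
          <res-ref-name>jdbc/LocalDB</res-ref-name>
          <res-type>javax.sql.DataSource</res-type>
          <res-auth>Container</res-auth>
        </resource-ref>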

    Read the article

  • Disk Redundancy across different server

    - by Mascarpone
    I have 3 servers, all with the same specs: Intel CPU, 8 GB RAM, Linux or BSD, and a single 2 TB desktop SATA drive with more than 10K hours of operation and less than 300 GB used. My provider cannot install a second hard drive, but can guarantee me that the drive will be replaced immediately in case of failure, with another equally crappy drive. The likelihood of drive failure is high, and since I can't use RAID, I was thinking about keeping a backup of each machine on all the other machines, so that there are always 2 copies on 2 different drives, plus the original. I would synchronize the drives every hour with rsync to guarantee some sort of redundancy; since bandwidth inside the DC is free, it would be much cheaper than offsite backup. (A daily offsite backup is kept anyhow.) What do you think? Any suggestions?
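
    A minimal sketch of the hourly cross-copy with rsync over ssh from cron; hostnames and paths are placeholders, and --delete is a deliberate choice to mirror exactly (drop it to keep deleted files around):

        # /etc/cron.d/crossbackup on server1, assuming ssh keys are already set up
        0 * * * *   root  rsync -a --delete /data/ backup@server2:/backup/server1/
        30 * * * *  root  rsync -a --delete /data/ backup@server3:/backup/server1/

    Bear in mind that a plain rsync mirror also faithfully copies mistakes and corruption within the hour; rsnapshot or rsync's --link-dest option give cheap hourly history if that matters.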

    Read the article

  • Can't enable Apache mods on emerge

    - by ranisalt
    I want to add mod_proxy and mod_proxy_http to the Apache server on my Gentoo box, but apparently some file with higher priority on the system is disabling the mods and preventing me from installing them. I am currently editing the /usr/portage/profiles/base/make.defaults file, but it gets updated (and my changes lost) every time I update the system, so I have to edit it again after every system update or Apache reinstall. Besides that, I have already added the dependencies to the /etc/portage/package.use file: www-servers/apache proxy proxy_http. What other files do I have to change, or which flags should I check, so that I can enable proxy and not have to edit files again every time?
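
    On Gentoo, the Apache module list is driven by the APACHE2_MODULES variable rather than ordinary USE flags, and setting it in /etc/portage/make.conf survives system updates, unlike edits to the profile's make.defaults. A sketch; the module list is illustrative, so start from the profile's default list and append the two proxy modules:

        # /etc/portage/make.conf
        APACHE2_MODULES="actions alias auth_basic authz_host dir mime log_config rewrite proxy proxy_http"

        # then rebuild apache with the new module set
        emerge --ask --newuse www-servers/apache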

    Read the article
