Search Results

Search found 15209 results on 609 pages for 'configuration'.


  • How to configure Linux to open files by extension?

    - by Gregory MOUSSAT
    The various Linux desktops open files according to their MIME type. This is a very nice feature, but I also need to open files by extension (as with Windows). For instance, I want to open every xxxxx.vnc file with a specific program when I double-click on it. I use Xfce, but I don't think it differs from GNOME or KDE, because all of them use the same configuration files (defaults.list and mimeapps.list). If possible, the settings should be user-specific, not system-wide. I've only found very sparse information about this, and all of it is system-wide, so it may be wiped out by updates.
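
    Since extensions map to MIME types through glob rules, a user-level approach is to define a dedicated MIME type for the extension and bind an application to it. A minimal sketch of the XDG machinery (vncviewer.desktop stands for whichever application you actually want):

        <!-- ~/.local/share/mime/packages/x-vnc.xml -->
        <?xml version="1.0" encoding="UTF-8"?>
        <mime-info xmlns="http://www.freedesktop.org/standards/shared-mime-info">
          <mime-type type="application/x-vnc">
            <comment>VNC connection file</comment>
            <glob pattern="*.vnc"/>
          </mime-type>
        </mime-info>

    Then run update-mime-database ~/.local/share/mime and add application/x-vnc=vncviewer.desktop under [Default Applications] in the per-user mimeapps.list (at the time, ~/.local/share/applications/mimeapps.list). Everything lives under $HOME, so system updates leave it alone.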

    Read the article

  • ClearOS - how to avoid getting stuck at a fsck message at boot?

    - by Scott Szretter
    I have had this happen a couple of times: I have a ClearOS Enterprise 5.2 box, and after a power outage or similar, it ends up showing an error at boot saying that fsck needs to be run (I think it said with, or without, the -a parameter). The problem is, I need this box to be headless, at a remote location miles away! So I need a way to have it repair itself automatically, without someone present with a monitor and keyboard. Another possibility is to avoid the issue altogether: maybe something can be changed so it's very unlikely to happen (I can't avoid the power outages, of course, at least not practically). Finally, maybe it could boot off read-only media (CD) or a read-only file system, at least for the base OS, so that it would always boot with enough configuration to allow remote access or basic connectivity?
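
    On RHEL/CentOS 5, from which ClearOS 5.2 is derived, the boot scripts read /etc/sysconfig/autofsck, so one hedged option (assuming ClearOS keeps the stock rc.sysinit behavior) is to have the boot-time fsck answer yes to every repair prompt:

        # /etc/sysconfig/autofsck
        AUTOFSCK_OPT="-y"    # apply all repairs without asking

    This trades some safety for availability, which is usually the right trade on a box nobody can physically reach.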

    Read the article

  • emca fails with "Database instance is unavailable" though available

    - by Giri Mandalika
    The following example shows the symptoms of failure, and the exact error message. $ emca -repos create ... Password for SYSMAN user: Do you wish to continue? [yes(Y)/no(N)]: Y Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \ checkDbAvailabilityImpl WARNING: ORA-01034: ORACLE not available Nov 19, 2012 10:33:42 AM oracle.sysman.emcp.DatabaseChecks \ throwDBUnavailableException SEVERE: Database instance is unavailable. Fix the ORA error thrown and run EM Configuration Assistant again. Some of the possible reasons may be : 1) Database may not be up. 2) Database is started setting environment variable ORACLE_HOME with trailing '/'. Reset ORACLE_HOME and bounce the database. For eg. Database is started setting environment variable ORACLE_HOME=/scratch/db/ . Reset ORACLE_HOME=/scratch/db and bounce the database. Fix: Ensure that ORACLE_HOME points to the right location in the $ORACLE_HOME/bin/emca file. If ORACLE_HOME was copied over from another location rather than installed from scratch, it likely leaves a wrong ORACLE_HOME location in several Enterprise Manager (EM) specific scripts and files. This usually happens when the directory structure on the target machine is not identical to that on the original/source machine, including the top-level directory where the Oracle RDBMS was originally installed by the installer.
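
    A quick way to check both suspects from a shell (a sketch; the paths are only the article's examples):

        $ echo $ORACLE_HOME                       # must not end with a trailing slash
        $ export ORACLE_HOME=${ORACLE_HOME%/}     # strip one if present
        $ grep ORACLE_HOME $ORACLE_HOME/bin/emca  # verify the path hard-coded in the script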

    Read the article

  • vsftpd: chroot_local_user causes GNU/TLS-error

    - by akrosikam
    Distro: Ubuntu 12.04.2 Server 32-bit FTP daemon: vsftpd 2.3.5 (from the default "main" repository) Problem: Since upgrading from Ubuntu 10.04 to Ubuntu 12.04 (nothing changed on the client side), vsftpd has refused to create chroot jails with the "chroot_local_user" directive on FTP(e/i)S connections. Here's my vsftpd.conf: anonymous_enable=NO local_enable=YES write_enable=YES local_umask=022 dirmessage_enable=YES xferlog_enable=YES xferlog_std_format=YES ftpd_banner=How are you gentlemen. listen=YES pam_service_name=vsftpd userlist_enable=YES userlist_deny=NO tcp_wrappers=YES connect_from_port_20=YES ftp_data_port=20 listen_port=21 pasv_enable=YES pasv_promiscuous=NO pasv_min_port=4242 pasv_max_port=4252 pasv_addr_resolve=YES pasv_address=your.domain.com ssl_enable=YES allow_anon_ssl=NO force_local_logins_ssl=YES force_local_data_ssl=YES ssl_tlsv1=YES ssl_sslv2=NO ssl_sslv3=NO rsa_cert_file=/home/maw/ssl_ftp_test/vsftpd.pem rsa_private_key_file=/home/maw/ssl_ftp_test/vsftpd.pem debug_ssl=YES log_ftp_protocol=YES ssl_ciphers=HIGH chroot_local_user=NO How to reproduce: Have a working SSL/TLS-secured vsftpd configuration (I suggest one similar to the above) ready. Try to connect with an FTP user client and upload some files. With my setup, the above config works well at this point. Edit /etc/vsftpd.conf and set chroot_local_user= to YES. Make sure that chroot_list_enable= and/or chroot_list_file= are not set. Comment them out if they are. Save and exit. Run sudo restart vsftpd (or sudo service vsftpd restart if you like) in a terminal. Try to connect with an FTP user client. You should see a message more or less like this: GnuTLS error -15: An unexpected TLS packet was received. This is an issue for me, as I do not want FTP sessions to be able to list files outside the user's home folder. I have checked with several client-side apps, and I get the same results with every one of them. FileZilla is not so good regarding cipher methods nowadays, but as I am able to make an FTP(e)S connection over TLS (as long as chroot'ing is disabled and ssl_ciphers is set to HIGH), I have a feeling ciphers are not the issue this time, and that I won't find the answer by tweaking configs on the client side. My vsftpd.log stays empty, even though debug_ssl and log_ftp_protocol are enabled, so no info there either.
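
    A hedged guess at the cause: vsftpd 2.3.5 introduced a check that refuses to run with a writable root directory inside a chroot, and over TLS the client often only sees the dropped control connection as a GnuTLS error. If that is what is happening here, two common workarounds are:

        # make the chroot root non-writable (keep uploads in a subdirectory):
        chmod a-w /home/maw
        # or, on vsftpd 3.x / patched 2.3.5 builds that support it, in vsftpd.conf:
        allow_writeable_chroot=YES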

    Read the article

  • Should a webserver in the DMZ be allowed to access MSSQL in the LAN?

    - by Allen
    This should be a very basic question, but I tried to research it and couldn't find a solid answer. Say you have a web server in the DMZ and an MSSQL server in the LAN. My opinion, and what I've always assumed to be correct, is that the web server in the DMZ should be able to access the MSSQL server in the LAN (maybe you'd have to open a port in the firewall, which would be fine in my view). Our networking guys are now telling us that we can't have any access to the MSSQL server in the LAN from the DMZ. They say that anything in the DMZ should only be accessible FROM the LAN (and the web), and that the DMZ should not have access TO the LAN, just as the web does not have access to the LAN. So my question is: who is right? Should the DMZ have access to/from the LAN, or should access to the LAN from the DMZ be strictly forbidden? All this assumes a typical DMZ configuration.
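
    The common middle ground is a single pinhole rather than blanket access: the firewall permits exactly one DMZ host to reach exactly one LAN host on the SQL Server port, and nothing else. A sketch in iptables terms (all addresses are hypothetical):

        # allow only the web server to reach the SQL server on 1433/tcp
        iptables -A FORWARD -s 10.0.2.10 -d 192.168.1.20 -p tcp --dport 1433 -j ACCEPT
        # drop everything else from the DMZ into the LAN
        iptables -A FORWARD -s 10.0.2.0/24 -d 192.168.1.0/24 -j DROP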

    Read the article

  • /lib/udev/net.agent causing high CPU usage

    - by Antoine Benkemoun
    We have a number of Soekris boxes running Debian Squeeze. They were installed through an automated process that uses debootstrap and copies the result onto a Compact Flash card. We use Puppet to manage the configuration of all these boxes. Before Debian Squeeze, they were running Voyage Linux, which is just a "lighter" version of Debian. Since we switched, we're seeing the /lib/udev/net.agent process take up an awful lot of CPU. So far we have been unable to find any clue as to what this really does and why it's taking up so much CPU time. In htop, net.agent sits at the top of the CPU usage list. We are seeing absolutely no syslog messages related to this process, so we're a bit lost. I am looking for pointers as to what this process does in general and what could cause such CPU usage.
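
    On Debian, /lib/udev/net.agent is the hook udev runs for network interface add/remove events, so sustained CPU use usually means a flood of uevents. A debugging sketch to confirm that:

        # watch kernel and udev events in real time; a storm of add/remove
        # events for an interface points at flapping hardware or a driver bug
        udevadm monitor --kernel --udev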

    Read the article

  • Redmine on Redhat/CentOS 5 Without using virtual hosts

    - by flyclassic
    I have followed all the steps to install Redmine on CentOS 5, except for the Apache part: http://www.redmine.org/projects/redmine/wiki/HowTo_install_Redmine_on_CentOS_5 I do not want to configure a virtual host, as we are not using virtual hosts. Can I configure Redmine to run at http://hostname/redmine? Apparently it doesn't work in my case. Redmine was extracted into the web server document root /var/www/html/, at /var/www/html/redmine. What I did was add a redmine.conf to /etc/httpd/conf.d/ with the following configuration and restart the server: <Location "/redmine"> Options Indexes ExecCGI FollowSymLinks -MultiViews Order allow,deny Allow from all AllowOverride all PassengerEnabled On RailsBaseURI /var/www/html/redmine RailsEnv production </Location> Now I get this error: Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem. Error message: No such file or directory - config/environment.rb Exception class: Errno::ENOENT Application root: /var/www/html Where have I gone wrong?
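
    A hedged reading of the error: "Application root: /var/www/html" shows Passenger is treating the document root itself as the Rails application. For a sub-URI deployment, Passenger expects the sub-directory to be a symlink to the application's public folder, and RailsBaseURI takes the URI, not a filesystem path. A sketch (assuming Redmine is moved out of the document root, e.g. to /opt/redmine):

        ln -s /opt/redmine/public /var/www/html/redmine

        # and in redmine.conf:
        RailsBaseURI /redmine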

    Read the article

  • DELETE method not working in Apache 2.4

    - by Xavi
    I'm running Apache 2.4 locally and dealing with RESTful services authenticating through OAuth. GET, PUT and POST work fine, but I can't get DELETE to work. I've tried installing WebDAV and mod_dav, overriding methods in .htaccess, Limit sections, forcing (enabling) DELETE in the configuration, and pretty much everything I've found on Google and StackExchange. Here's a copy of my .htaccess right now: <IfModule mod_rewrite.c> Header add Access-Control-Allow-Origin: * Header add Access-Control-Allow-Headers: Authorization Header add Access-Control-Allow-Headers: X-Requested-With Header add Access-Control-Request-Method: HEAD Header add Access-Control-Request-Method: GET Header add Access-Control-Request-Method: PUT Header add Access-Control-Request-Method: DELETE Header add Access-Control-Request-Method: OPTIONS Options +FollowSymlinks Options -Indexes RewriteEngine on RewriteRule ^(.*)\.* index.php [NC,L] </IfModule> Chrome's console shows: XMLHttpRequest cannot load http://dev.server.com/cars/favourite/. Method DELETE is not allowed by Access-Control-Allow-Methods. Is there anything I am missing?
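
    One hedged observation on the .htaccess above: Access-Control-Allow-Methods is the response header the browser checks, while Access-Control-Request-Method is a header the browser itself sends on the preflight, so setting it server-side does nothing. A sketch of headers that should satisfy the preflight:

        Header always set Access-Control-Allow-Origin "*"
        Header always set Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
        Header always set Access-Control-Allow-Headers "Authorization, X-Requested-With"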

    Read the article

  • Slow NFS transfer performance of small files

    - by Arie K
    I'm using Openfiler 2.3 on an HP ML370 G5, Smart Array P400, with SAS disks combined in RAID 1+0. I set up an NFS share from an ext3 partition using Openfiler's web-based configuration, and I succeeded in mounting the share from another host. Both hosts are connected via a dedicated gigabit link. A simple benchmark using dd: $ dd if=/dev/zero of=outfile bs=1000 count=2000000 2000000+0 records in 2000000+0 records out 2000000000 bytes (2.0 GB) copied, 34.4737 s, 58.0 MB/s I see it can achieve a moderate transfer speed (58.0 MB/s). But if I copy a directory containing many small files (.php and .jpg, around 1-4 kB per file) totaling ~300 MB, the cp process takes about 10 minutes. Is NFS unsuitable for transferring small files like this, or are there parameters that must be adjusted?
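
    Small-file copies over NFS are dominated by synchronous metadata round-trips rather than bandwidth, so the dd figure says little about this workload. One commonly tried export option (a sketch; the path and network are assumptions):

        # /etc/exports: 'async' acknowledges writes before they hit disk;
        # much faster for small files, at the cost of data loss on a crash
        /mnt/share  192.168.1.0/24(rw,async,no_subtree_check)

    Alternatively, packing the tree into one stream (tar piped over ssh, or rsync) replaces thousands of per-file round-trips with one transfer and is often an order of magnitude faster.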

    Read the article

  • How do I configure multiple domain names on my IIS server?

    - by Dillie-O
    We have a few websites running on one instance of IIS, and each needs to be mapped to its own domain name. For example: Site A has the domain name coolness.com; Site B has the domain name 6to8Weeks.com; Site C has the domain name PhatTech.com. When I look at the "Web Site Identification" section of the IIS configuration window, I notice that I can specify an IP address and port, but if I click the Advanced button, I can also configure the site based on host header values. How do I configure each site in IIS? Ideally I would like them all to listen on port 80, so I don't have weird URLs, but I'm not sure if I do this using headers, IP addresses, both, or something else.
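
    Host headers are the usual answer when every site shares one IP address and port 80: IIS routes each request by the Host header the browser sends. A sketch of the bindings (names taken from the question):

        Site A:  IP: (All Unassigned)  Port: 80  Host header: coolness.com
        Site B:  IP: (All Unassigned)  Port: 80  Host header: 6to8weeks.com
        Site C:  IP: (All Unassigned)  Port: 80  Host header: phattech.com

    Each site typically gets a second binding for its www. variant. The approach only breaks down for SSL on IIS of that vintage, where each certificate needs its own IP address.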

    Read the article

  • set virtual host on Apache2.2 and PHP 5.3

    - by Avinash
    Hi, I want to set up virtual hosts on Apache 2.2 so I can access my sites using my IP address and a port number, like http://192.168.101.111:429 for one site, http://192.168.101.111:420 for another, and so on. My machine's OS is Windows 7. I have tried the below in my httpd.conf file: Listen 192.168.101.83:82 #chaffoteaux <Directory "Path to project folder"> AllowOverride All </Directory> <VirtualHost 192.168.101.83:82> ServerAdmin [email protected] DirectoryIndex index.html index.htm index.php index.html.var DocumentRoot "Path to project folder" #ServerName dummy-host.example.com ErrorLog logs/Zara.log #ErrorLog logs/dummy-host.example.com-error_log #CustomLog logs/dummy-host.example.com-access_log common </VirtualHost> Can you please suggest anything missing in my configuration? Thanks in advance, Avinash
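
    One thing that stands out: the Listen directive (port 82 on 192.168.101.83) does not match the ports being requested (429 and 420 on 192.168.101.111), and Apache needs one Listen per port it serves. A minimal port-based sketch (ports from the question; the paths are placeholders):

        Listen 429
        Listen 420

        <VirtualHost *:429>
            DocumentRoot "C:/projects/site1"
            <Directory "C:/projects/site1">
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>
        </VirtualHost>

        <VirtualHost *:420>
            DocumentRoot "C:/projects/site2"
        </VirtualHost>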

    Read the article

  • How do I set up live audio streams to a DLNA compliant device?

    - by Takkat
    Is there a way to stream the live output of the sound card from our 12.04.1 LTS amd64 desktop to a DLNA-compliant external device on our network? Selecting media content in shared directories using Rygel, miniDLNA, and uShare works fine, but so far we have completely failed to get a live audio stream to a client via DLNA. PulseAudio claims to have a DLNA/UPnP media server that, together with Rygel, is supposed to do just this, but we were unable to get it running. We followed the steps outlined in live.gnome.org, this answer here, and also another similar guide. As soon as we select the local audio device, or our GST-Launch stream, in the DLNA client, Rygel displays the following message and the client states it has reached the end of the playlist: (rygel:7380): Rygel-WARNING **: rygel-http-request.vala:97: Invalid seek request This is how we configured GST-Launch in rygel.conf: [GstLaunch] enabled=true launch-items=mypulseaudiosink mypulseaudiosink-title=Audio on @HOSTNAME@ mypulseaudiosink-mime=audio/x-wav mypulseaudiosink-launch=pulsesrc device=<device> ! wavpackenc For <device> we tried the default sink name, the same name with .monitor appended, and also upnp-sink and upnp.monitor, which were created when we selected DLNA media server in paprefs. We also tried encoding with lamemp3enc, with no luck. These are our pulseaudio modules: http://paste.ubuntu.com/1202913/ These are our sinks: http://paste.ubuntu.com/1202916/ Did we miss any additional configuration needed to get this running? Are there any alternatives for sending our sound card's audio as a live stream to a DLNA client?
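
    One detail worth checking (an observation, not a confirmed fix): in GStreamer, wavpackenc produces WavPack (audio/x-wavpack), not WAV, so the stream's real format does not match the advertised mime of audio/x-wav; the element that produces audio/x-wav is wavenc. A sketch of a consistent launch line (the monitor device name is an assumption):

        mypulseaudiosink-mime=audio/x-wav
        mypulseaudiosink-launch=pulsesrc device=alsa_output.pci-0000_00_1b.0.analog-stereo.monitor ! audioconvert ! wavenc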

    Read the article

  • Is this a common/bug on this PPPoE setting for Cisco ASA 5505?

    - by DCAlliances
    We have to change the firewall setup because we've changed internet providers. In the new setup we have an ADSL modem in full bridge mode, and the firewall configuration has to change from a static IP to the "Use PPPoE" option on the Outside interface, with the PPPoE username and password, CHAP authentication, WAN IP, and WAN subnet mask. [See the attachment] It's been working OK with the PPPoE option; the issue is that if we unplug the firewall's power cable, the "Outside" interface comes back blank: no WAN IP, PPPoE username, or password. So basically we have to retype this information. Is this common, or a bug? Any ideas? Thanks
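
    Settings that vanish after a power cycle are the classic symptom of a running configuration that was never saved to flash; the ASA boots from the startup-config, not from whatever was live. A hedged first thing to try after re-entering the PPPoE settings:

        ciscoasa# write memory
        ! or, equivalently:
        ciscoasa# copy running-config startup-config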

    Read the article

  • Nginx & Passenger - failed (11: Resource temporarily unavailable) while connecting to upstream

    - by Toby Hede
    I have an Nginx and Passenger setup that is proving problematic. At relatively low loads the server seems to get backed up and starts churning results like this into the error.log: connect() to unix:/passenger_helper_server failed (11: Resource temporarily unavailable) while connecting to upstream My passenger setup is: passenger_min_instances 2; passenger_pool_idle_time 1200; passenger_max_pool_size 20; I have done some digging, and it looks like the CPU gets pegged. Memory usage seems fine: passenger_memory_stats shows at most about 700 MB being used, but CPU approaches 100%. Is this enough to cause this type of error? Should I bring the pool size down? Are there other configuration settings I should be looking at? Any help appreciated. Other pertinent information: Amazon EC2 Small Instance, Ubuntu 10.10, Nginx (latest stable), Passenger (latest stable), Rails 3.0.4
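
    The error means Passenger's helper server socket stopped accepting connections, which fits a CPU-starved box: an EC2 small instance has a single virtual core, and 20 Rails processes is far more than it can serve, so requests queue until the socket backlog overflows. A hedged starting point:

        passenger_max_pool_size 4;     # roughly 2-4 workers per core
        passenger_min_instances 2;
        passenger_pool_idle_time 300;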

    Read the article

  • How to enable extended logging for classic asp on IIS7 on Windows 2008 R2

    - by Neil Trodden
    I had to deploy an application that was not written by me onto the above configuration. It is a rather bizarre hybrid of ASP.NET and classic ASP, and it's the classic ASP that is proving troublesome. The client is having problems with 500 Internal Server Errors, and I can see some of these in the logs, but I only get the error code and the page name, little else. What I would like to see is the actual error message, to at least give me an idea of what is going on (or not going on, depending on your point of view). I don't want to display errors in the browser, as I don't know the code well enough, and this could (for all I know) display some crazy code where the db password is hard-coded into the site.
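
    IIS 7 can log classic ASP errors to the Windows event log without ever sending them to the browser; the switches live in the system.webServer/asp configuration section. A sketch using appcmd from an elevated prompt:

        %windir%\system32\inetsrv\appcmd set config -section:asp /errorsToNTLog:true /logErrorRequests:true
        %windir%\system32\inetsrv\appcmd set config -section:asp /scriptErrorSentToBrowser:false

    Failed Request Tracing is the other common route: enabled for status code 500, it captures the full ASP error text per request.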

    Read the article

  • PFSense CSR Generation

    - by ErnieTheGeek
    I'm trying to figure out how to generate a CSR so I can generate and install an SSL cert. Here's a LINK to what I've tried. Granted, that post was for m0n0wall, but I figured openssl is openssl. Here's where I get stuck. When I run this: /usr/bin/openssl req -new -key mykey.key -out mycsr.csr -config /usr/local/ssl/openssl.cnf I get this: error on line -1 of /usr/local/ssl/openssl.cnf 54934:error:02001002:system library:fopen:No such file or directory:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/bio/bss_file.c:122:fopen('/usr/local/ssl/openssl.cnf','rb') 54934:error:2006D080:BIO routines:BIO_new_file:no such file:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/bio/bss_file.c:125: 54934:error:0E078072:configuration file routines:DEF_LOAD:no such file:/usr/src/secure/lib/libcrypto/../../../crypto/openssl/crypto/conf/conf_def.c:197:
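
    The error only means OpenSSL cannot find a config file at the path given; pfSense (FreeBSD-based) does not ship one at /usr/local/ssl/. Two hedged options: point -config at the system file, or ask the installed OpenSSL where it expects its directory:

        # FreeBSD's stock location (an assumption for this pfSense build):
        openssl req -new -key mykey.key -out mycsr.csr -config /etc/ssl/openssl.cnf

        # or print OPENSSLDIR to find the right path:
        openssl version -d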

    Read the article

  • Fedora12, XP and connection sharing via iptables

    - by Paul L
    Just a quick question (I hope) to find out if what I'm trying is even possible. I am trying to share an internet connection with Fedora 12 as the default gateway and an XP machine hooked up via NIC, using the iptables commands shown in Mark Sobell's book 'A Practical Guide to Fedora and Red Hat Enterprise Linux'. These are the commands as placed in /etc/rc.local: iptables -A FORWARD -i eth1 -o eth0 -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT iptables -A FORWARD -j LOG iptables -t NAT -A POSTROUTING -o eth1 -j MASQUERADE I did flip the in and out parameters to match my NIC configuration (as opposed to the example from the book) but otherwise followed the example. One thing to note is that Sobell did not mention whether this should work with a mix of Linux and XP. One other note (maybe meaningless) is that I do have Samba working between the two machines. Thanks for any insights anyone might have. PL
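
    Two things commonly bite with this exact setup (a sketch, keeping the poster's interface roles, with eth1 as the internet-facing NIC): iptables table names are lowercase, so -t NAT should fail outright, and the kernel must be told to forward packets at all:

        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE
        echo 1 > /proc/sys/net/ipv4/ip_forward    # or net.ipv4.ip_forward = 1 in /etc/sysctl.conf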

    Read the article

  • Trouble installing php memcache extension

    - by 2020vert
    I'm trying to install memcache on MAMP, but I get the warning below, and when I continue it seems to complete properly. I added the line extension=memcache.so to php.ini and restarted MAMP, but phpinfo() doesn't list the memcache extension. $ ./pecl install memcache downloading memcache-2.2.5.tgz ... Starting to download memcache-2.2.5.tgz (35,981 bytes) ..........done: 35,981 bytes 11 source files, building WARNING: php_bin /Applications/MAMP/bin/php5/bin/php appears to have a suffix 5/bin/php, but config variable php_suffix does not match running: phpize Configuring for: PHP Api Version: 20041225 Zend Module Api No: 20060613 Zend Extension Api No: 220060519 Enable memcache session handler support? [yes] : yes ... Build process completed successfully Installing '/Applications/MAMP/bin/php5/lib/php/extensions/no-debug-non-zts-20060613/memcache.so' install ok: channel://pecl.php.net/memcache-2.2.5 configuration option "php_ini" is not set to php.ini location You should add "extension=memcache.so" to php.ini
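
    With MAMP the usual trap is that more than one php.ini exists and the edited one is not the one Apache loads. A hedged check:

        # what the command-line PHP loads:
        /Applications/MAMP/bin/php5/bin/php -i | grep 'Loaded Configuration File'

    Compare that with the "Loaded Configuration File" row in the browser's phpinfo() output; if they differ, add extension=memcache.so to the file the browser reports, and confirm its extension_dir points at the directory pecl installed into (.../no-debug-non-zts-20060613).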

    Read the article

  • BizTalk 2009 - The Community ODBC Adapter: Installation

    - by Stuart Brierley
    I have previously detailed the installation of MySQL, the configuration of MySQL, and the installation of the ODBC Data Connector for MySQL. The reason I needed to install and configure these was to provide a test environment for a BizTalk Server 2009 solution I am working on, where BizTalk will be querying and populating a MySQL database. To do this I then needed to install and add the Community ODBC adapter from Two Connect: "The Community BizTalk Adapter for ODBC is based on the code that was first made available on GotDotNet a few years ago. TwoConnect has refreshed this code, added an installer, and tested it against the latest BizTalk editions. We are releasing the updates back to the BizTalk developer, user and partner community as part of our ongoing community initiatives. This is the second adapter package that TwoConnect makes available to the community, after the very successful release of the BizTalk WSE 3 adapter a couple of years ago. This adapter is useful in all ODBC-based integration scenarios. The following are the new features added and fixes made to the old code base on GotDotNet." Detailed below are the installation instructions for this adapter. Downloading and running the installer will load up the splash screen. Next you need to select the installation location for the adapter. You then need to confirm the installation, following which you will be shown the installation progress. Assuming all has gone well, you should see the installation complete screen. Once the installation has completed successfully, you will then need to add the adapter to your BizTalk server. To do this, open the BizTalk Administration console, expand Platform Settings, right-click on Adapters, then select New\Adapter. You should then be able to select the ODBC adapter and choose the display name for the adapter. The adapter will then be shown in the BizTalk Administration console. Next I will be looking at using the ODBC adapter when generating schemas, creating a receive port, and creating a send port.

    Read the article

  • Secondary DHCP server won't start on Centos 6.2

    - by Slowjoe
    I'm trying to create a backup DHCP server. Server times are in sync. Primary server starts fine. Secondary server won't start. Error from /var/log/messages is: Sep 15 14:47:45 stream dhcpd: Copyright 2004-2010 Internet Systems Consortium. Sep 15 14:47:45 stream dhcpd: All rights reserved. Sep 15 14:47:45 stream dhcpd: For info, please visit https://www.isc.org/software/dhcp/ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 25: invalid statement in peer declaration Sep 15 14:47:45 stream dhcpd: #011max-response-default Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 41: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 49: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: WARNING: Host declarations are global. They are not limited to the scope you declared them in. Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 70: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: /etc/dhcp/dhcpd.conf line 78: failover peer dhcp-failover: not found Sep 15 14:47:45 stream dhcpd: failover peer "dhcp-failover" Sep 15 14:47:45 stream dhcpd: ^ Sep 15 14:47:45 stream dhcpd: Configuration file errors encountered -- exiting Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: This version of ISC DHCP is based on the release available Sep 15 14:47:45 stream dhcpd: on ftp.isc.org. Features have been added and other changes Sep 15 14:47:45 stream dhcpd: have been made to the base software release in order to make Sep 15 14:47:45 stream dhcpd: it work better with this distribution. Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: Please report for this software via the CentOS Bugs Database: Sep 15 14:47:45 stream dhcpd: http://bugs.centos.org/ Sep 15 14:47:45 stream dhcpd: Sep 15 14:47:45 stream dhcpd: exiting. Config file contents: # DHCP Server Configuration file. # see /usr/share/doc/dhcp*/dhcpd.conf.sample # see 'man 5 dhcpd.conf' # option domain-name "eng.foo.com"; option domain-name-servers ns0.eng.foo.com, ns1.eng.foo.com; option ntp-servers ntp.eng.foo.com; #option time-servers ntp.eng.foo.com; default-lease-time 3600; max-lease-time 7200; authoritative; log-facility local7; failover peer "dhcp-failover" { secondary; address 10.0.1.70; port 647; peer address 10.0.1.11; peer port 647; max-response-default 30; max-unacked-updates 10; load balance max seconds 3; } # # Management subnet # subnet 10.0.0.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 10.0.0.255; option routers 10.0.0.1; option domain-search "eng.foo.com", "foo.com"; # Unknown clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 300; range 10.0.0.240 10.0.0.249; allow unknown-clients; } # Known clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 28800; range 10.0.0.150 10.0.0.199; deny unknown-clients; } include "/etc/dhcp/dhcpd.conf-engmgmt"; } # # Data subnet # subnet 10.0.1.0 netmask 255.255.255.0 { option subnet-mask 255.255.255.0; option broadcast-address 10.0.1.255; option routers 10.0.1.1; option domain-search "eng.foo.com", "foo.com"; # Unknown clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 300; range 10.0.1.240 10.0.1.249; allow unknown-clients; } # Known clients get this pool pool { failover peer "dhcp-failover"; max-lease-time 28800; range 10.0.1.150 10.0.1.199; deny unknown-clients; } # For centos network installs if substring (option vendor-class-identifier, 0, 8) = "anaconda" { filename "/autohome/distro/ks/"; next-server eng-data.eng.foo.com; } # For PXE network installs if substring (option vendor-class-identifier, 0, 9) = "PXEClient" { filename "pxelinux.0"; next-server eng-data.eng.foo.com; } # For KVM PXE network installs if substring (option vendor-class-identifier, 0, 9) = "Etherboot" { filename "pxelinux.0"; next-server eng-data.eng.foo.com; } include "/etc/dhcp/dhcpd.conf-engdata"; }
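
    The first error is the one that matters; everything after "failover peer dhcp-failover: not found" is fallout from the peer declaration failing to parse. A hedged reading of line 25: ISC dhcpd's statement is max-response-delay, not max-response-default. A corrected peer block (values from the question):

        failover peer "dhcp-failover" {
            secondary;
            address 10.0.1.70;
            port 647;
            peer address 10.0.1.11;
            peer port 647;
            max-response-delay 30;
            max-unacked-updates 10;
            load balance max seconds 3;
        }

    (mclt and split are set only on the primary, so their absence is correct for a secondary.)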

    Read the article

  • What are the options for hosting a small Plone site?

    - by Tina Russell
    I’ve developed a portfolio website for myself using Plone 4, and I’m looking for someplace to host it. Most Plone hosting services seem to focus on large, corporate deployments, but I need something that I can afford on a very limited budget and fits a small, single-admin website. My understanding is that my basic options are thus: I can go with a hosting service that specifically provides Plone. I know of WebFaction, but what others exist? Also, I’d have two stipulations for a Plone hosting service: (a) It needs to use Plone 4, for which I’ve developed my site, and (b) it needs to allow me SSH access to a home directory (including the Plone configuration), so that I may use my custom development eggs and such. I could use a VPS hosting service. What are my options here? Again, I need something cheap and scaled to my level. I could use Amazon EC2 or a similar service (please tell me of any) and pay by the tiniest unit of data. I’m a little scared of this because I have no idea how to do a cost-benefit analysis between this and a regular VPS host. The advantage of this approach would be that I only pay for what I use, making it very scalable, but I don’t know how the overall cost would compare to any VPS host under similar circumstances. What factors enter into the cost of Amazon EC2? What can I expect to pay under either option for regular traffic for a new website? Which one is more desirable for when a rush of visitors drive up my bandwidth bill? One last note: I know Plone isn’t common for websites for individuals, but please don’t try to talk me out of it here; that’s a completely different subject. For now, assume I’m sticking with Plone for good. Also, I have seen the Plone hosting services list on Plone.org—it’s twenty pages long, and the first page was nothing but professional Plone consulting services that sometimes offer hosting for business clients. So, that wasn’t much help. Thank you!

    Read the article

  • SQLAuthority News – Download Whitepaper – SQL Server 2008 R2 Analysis Services Operations Guide

    - by pinaldave
    SQL Server Analysis Services (SSAS) has always been an interesting subject for research. Analysis Services cubes are a very powerful tool in the hands of the business intelligence (BI) developer. They provide an easy way to expose even large data models directly to business users. Microsoft has published a very informative white paper, the Analysis Services Operations Guide, authored by Thomas Kejser, John Sirmon, and Denny Lee. In this guide you will find information on how to test and run Microsoft SQL Server Analysis Services in SQL Server 2005, SQL Server 2008, and SQL Server 2008 R2 in a production environment. The focus of this guide is how you can test, monitor, diagnose, and remove production issues on even the largest scaled cubes. This paper also provides guidance on how to configure the server for the best possible performance. It is the goal of this guide to make your operations processes as painless as possible, and to have you run with the best possible performance without any additional development effort to your deployed cubes. In this guide, you will learn how to get the best out of your existing data model by making changes transparent to the data model and by making configuration changes that improve the user experience of the cube. Download SQL Server 2008 R2 Analysis Services Operations Guide Note: Abstract taken from the white paper. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology

    Read the article

  • Announcing Release of Oracle Solaris Cluster 4.1!

    - by user9159196
    Oct 26, 2012: We are very happy to announce the release of Oracle Solaris Cluster 4.1, providing High Availability (HA) and Disaster Recovery (DR) capabilities for Oracle Solaris 11.1. This is yet more proof of Oracle's continued investment in Oracle Solaris technologies such as Oracle Solaris Cluster. For this new release we have improved the Solaris Cluster integration within the Oracle environment. For example, we've created new agents such as PeopleSoft JobScheduler, and added support for Oracle ZFS Storage Appliance replication in the Geo Edition module (to facilitate disaster recovery in multi-site configurations equipped with this type of storage). We have also extended the Oracle Solaris Zone Cluster feature with support for Oracle Solaris 10 zone clusters and exclusive-IP zones, to facilitate deployment of virtualized or cloud architectures. And there are many more new features to discover in this release. Stay tuned for more specific articles. In the meantime, check out the What's New document or, even better, download the latest version from here. Also, join the Oracle Solaris 11 Online Event on November 7, where an entire session will be devoted to discussing Oracle Solaris Cluster 4.1. Our Oracle Solaris Cluster engineers will be on hand to respond to your questions. We look forward to your feedback and input! -Nancy Chow and Eve Kleinknecht

    Read the article

  • How do I recover from a Linux CentOS 4.6 Operating System Crash

    - by Greg Omebije
    Our x86 Linux server running CentOS 4.6 has crashed. The machine boots only to the GRUB prompt. We have tried using "rescue mode" to recover the system, but it hasn't worked. How can we fix this problem so that the machine boots normally, or at least to the point where we can recover our files from the server? Our Linux server configuration: Dell PowerEdge 1950, Intel Xeon, 2 HDDs (146 GB each), 4 GB RAM, hardware and software RAID, CentOS 4.6. We used SystemRescueCd to boot the computer; the following is the output of fdisk -l: Disk /dev/sda: 293.3 GB, 292326211584 bytes, 255 heads, 63 sectors/track, 35539 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x00000080 Device Boot Start End Blocks Id System /dev/sda1 * 1 13 104391 83 Linux /dev/sda2 14 17769 142625070 8e Linux LVM
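
    Since fdisk still sees both partitions (a /boot partition and an LVM physical volume, the standard CentOS 4 layout), the data is likely intact and the boot loader is what broke. A recovery sketch from the rescue environment (the volume group and logical volume names are assumptions; CentOS 4 defaulted to VolGroup00/LogVol00):

        lvm vgchange -ay                                # activate the LVM volumes
        mount /dev/VolGroup00/LogVol00 /mnt/sysimage    # root filesystem
        mount /dev/sda1 /mnt/sysimage/boot              # /boot
        chroot /mnt/sysimage
        grub-install /dev/sda                           # reinstall GRUB to the MBR

    Files can also simply be copied off once the logical volume is mounted, before attempting any boot repair.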

    Read the article

  • Why doesn't Gradle include transitive dependencies in compile / runtime classpath?

    - by Francis Toth
    I'm learning how Gradle works, and I can't understand how it resolves a project's transitive dependencies. For now, I have two projects: projectA, which has a couple of dependencies on external libraries, and projectB, which has only one dependency, on projectA. No matter what I try, when I build projectB, Gradle doesn't include any of projectA's dependencies (X and Y) in projectB's compile or runtime classpath. I've only managed to make it work by including projectA's dependencies in projectB's build script, which, in my opinion, does not make any sense. These dependencies should be picked up automatically by projectB. I'm pretty sure I'm missing something, but I can't figure out what. I've read about "lib dependencies", but it seems to apply only to local projects, as described here, not to external dependencies. Here is the build.gradle I use in the root project (the one that contains both projectA and projectB): buildscript { repositories { mavenCentral() } dependencies { classpath 'com.android.tools.build:gradle:0.3' } } subprojects { apply plugin: 'java' apply plugin: 'idea' group = 'com.company' repositories { mavenCentral() add(new org.apache.ivy.plugins.resolver.SshResolver()) { name = 'customRepo' addIvyPattern "ssh://.../repository/[organization]/[module]/[revision]/[module].xml" addArtifactPattern "ssh://.../[organization]/[module]/[revision]/[module](-[classifier]).[ext]" } } sourceSets { main { java { srcDir 'src/' } } } idea.module { downloadSources = true } // task that creates the sources jar task sourceJar(type: Jar) { from sourceSets.main.java classifier 'sources' } // Publishing configuration uploadArchives { repositories { add project.repositories.customRepo } } artifacts { archives(sourceJar) { name "$name-sources" type 'source' builtBy sourceJar } } } This one concerns projectA only: version = '1.0' dependencies { compile 'com.company:X:1.0' compile 'com.company:B:1.0' } And this is the one used by projectB: version = '1.0' dependencies { compile ('com.company:projectA:1.0') { transitive = true } } Thank you in advance for any help, and please excuse my bad English.
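
    Two hedged observations. First, inside a multi-project build the usual way to pick up projectA together with everything it depends on is a project dependency, not the published coordinates:

        // projectB/build.gradle
        dependencies {
            compile project(':projectA')
        }

    Second, if the published artifact must be used, transitive resolution depends on the ivy.xml the SshResolver publishes actually listing X and Y; if the descriptor is missing or the ivy pattern does not find it, Gradle falls back to the bare jar with no transitive information (and transitive = true is already the default, so it does not help there).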

    Read the article
