Search Results

Search found 29159 results on 1167 pages for 'xml configuration'.


  • Performance data collection for short-running, ephemeral servers

    - by ErikA
    We're building a medical image processing software stack, currently hosted on various AWS resources. As part of this application, we have a handful of long-running servers (database, load balancers, web application, etc.). Collecting performance data on those servers is quite simple - my go-to recipe of Nagios (for monitoring/notifications) and Munin (for collection of performance data and displaying trends) will work just fine. However - as part of this application, we are constantly starting up and terminating compute instances on EC2. In typical usage, these compute instances start up, configure themselves, receive a job from a message queue, and then get to work processing that job, which takes anywhere from 15 minutes to over 8 hours. After job completion, these instances get terminated, never to be heard from again. What is a decent strategy for collecting performance data on these short-lived instances? I don't necessarily need monitoring on them - if they fail for whatever reason, our application will detect this and handle re-starting the job on another instance or raising the flag so an administrator can take a look at things. However, it still would be useful to collect information like CPU (user, idle, iowait, etc.), memory usage, network traffic, disk read/write data, etc. In our internal database, we track the instance ID of the machine that runs each job, and it would be quite helpful to be able to look up performance data for a specific instance ID for troubleshooting and profiling. Munin doesn't seem like a great candidate, as it requires maintaining a list of Munin nodes in a text file - far from ideal for an environment with a high rate of churn - and for the short amount of time each node will be running, I'd rather keep the full-resolution data indefinitely than have RRD water down the data over time. In the end, my guess is that this will require a monitoring engine that (1) uses a database (MySQL, SQLite, etc.) for configuration and data storage and (2) exposes an API for adding/removing hosts and services. Are there other things I should be thinking about when evaluating options? Perhaps I'm over-thinking this, though, and just ought to run sar at 1-minute intervals on these short-lived instances and collect the sar db files prior to termination.
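
    For reference, the sar fallback mentioned above can be sketched in a few lines: collect one sample per minute for the life of the instance, then ship the full-resolution binary file (keyed by instance ID) off-box just before termination. Paths and the bucket name below are assumptions, not part of the original setup:

      # /etc/cron.d/perf-collect -- one sa sample per minute (sa1 path varies by distro,
      # e.g. /usr/lib/sysstat/sa1 on Debian/Ubuntu, /usr/lib64/sa/sa1 on RHEL)
      * * * * * root /usr/lib/sysstat/sa1 1 1

      # in the instance's shutdown/termination hook: push the day's binary sar file to S3
      INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
      s3cmd put /var/log/sysstat/sa$(date +%d) s3://perf-archive/${INSTANCE_ID}.sa   # hypothetical bucket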

    Read the article

  • Configure a Windows PC as network appliance w/o monitor, keyboard and mouse

    - by Joshua Lim
    I intend to use a small form factor PC with Windows 7 Professional installed as a network appliance, attached directly to my customer's LAN without connecting a monitor, keyboard or mouse. How should I configure the networking for my PC so that I can access it via, say, my laptop? I figure that I can do it 2 ways. (1) Attach my laptop to the PC using a crossover cable, connect via RDP and configure networking. (2) Configure an IP address on the PC before I deliver it to the customer's place; at the customer's place, attach the PC to the LAN and connect to the IP address which I previously configured, from my laptop or from one of the customer's workstations. I know the first way is doable, but is the second way possible? I'm sorry if this question sounds ridiculous - I am a Delphi programmer but a novice on networking. Finally, if possible, I hope to make the configuration process web based, as I wouldn't like to reveal the fact that I am using Win7 Pro for the network appliance!
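
    The second approach is possible. A rough sketch of pre-configuring the box from an elevated command prompt before delivery - the addresses are placeholders that would have to match the customer's LAN, and the interface name varies:

      :: static address and DNS on the wired interface (values are placeholders)
      netsh interface ip set address name="Local Area Connection" static 192.168.1.50 255.255.255.0 192.168.1.1
      netsh interface ip set dns name="Local Area Connection" static 192.168.1.1

      :: allow inbound RDP so the box can be reached without monitor/keyboard
      netsh advfirewall firewall set rule group="remote desktop" new enable=Yes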

    Read the article

  • Windows is very slow with my new SSD

    - by Maksym H.
    I have an HP ProBook 4520s laptop with a Core i5 M480 @ 2.67 GHz, 4 GB RAM, a 640 GB HDD and a Radeon HD 6370M 1 GB video card. It would seem like a good stack for work, right? But my HDD crashed after about a year of carrying the laptop around every day. After buying a new SSD (Patriot Memory Torqx II 128 GB SATA II) and installing Windows 8 from scratch, it was amazingly fast. But by the time I had only installed the Windows updates, I felt the speed become the same as my old HDD, and after installing the other software I need for work it became so slow that my old PC with its lower configuration really works better than my awesome laptop... I checked that TRIM and AHCI mode are turned on. So why is that? I asked Patriot Memory support for help; they suggested sending them ATTO test results, which I did. Here is the response: "Thank you very much for the attached results. Looking at the results, I can see that your SSD speed is a lot lower than it should be. Can you tell me your system specs?" While waiting for their reply, I replaced Windows 8 with Windows 7 and it was again perfect, but the story repeats: it becomes slower and slower after every new piece of software I install. Check out some screenshots (sorry for the screenshot with the Russian Task Manager; I hope you will recognize the parameters in your English or other-language Task Manager). So the main issue is that something is constantly loading the disk to 100% and the response time jumps around 1000-3000 ms. Why am I asking about Windows? Because I tried to install Linux Mint (x86) and it just flies - great performance regardless of how many programs I have installed. Only Windows (either 7 or 8) has this problem. So guys, I appreciate any ideas about how to fix this, and maybe an answer to the main question: why is it so? Thanks!
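
    Two quick checks worth running on the Windows side, since everything here points at the disk: confirm TRIM really is enabled, and sample disk utilisation from the command line while the slowdown is happening (Resource Monitor's Disk tab then shows the per-process breakdown). The counter name below is the English one and may differ on a localised install:

      :: "DisableDeleteNotify = 0" means TRIM is enabled
      fsutil behavior query DisableDeleteNotify

      :: sample total disk busy time, one reading per second for 10 seconds
      typeperf "\PhysicalDisk(_Total)\% Disk Time" -sc 10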

    Read the article

  • Router startup problem

    - by gfmoz
    I have problems with my Tilgin Vood router. As I try to start the router by turning the power on (captain obvious), it generally doesn't work the first 3-4 times. This is getting very annoying. Five minutes after turning the power on, the router's signal LEDs don't blink the way they should in a connected state. I can connect to the router's web configuration interface through my PC connected to it via LAN, but I can't access the internet. It usually takes the router five minutes to get to the point where it should be connected to the internet, but, as noted, that doesn't happen the first few times. So I turn the router on 3-5 times, let it work for 5 minutes each time, and then suddenly, after turning the power off and on again, it all works. The problem concerns startup only; once I get it to work, everything runs as smooth as a 1980s text-based C++ game on a 3 GHz machine. I also have to restart my PC in order for everything to work. How can I solve this problem? Just leave the router turned on all the time? I'd prefer a daily IP switch, though. Might the problem have something to do with my PC? There is another one connected to the router too, and it doesn't work there either.

    Read the article

  • Cannot Connect to VSFTP outside of network

    - by jnolte
    I am having a hair-pulling issue with my vsftpd. I am not sure where to turn, and I have gone through everything to make sure it is working properly: when connecting locally using ftp localhost I am able to log in with the username and password I have specified. When I try to connect from outside I get the prompt Connected to domainname.com. but no prompt for user and password; in addition, when using an FTP client it hangs and never connects. The server is running Ubuntu 12.04 LTS and vsftpd 2.3.5. Here is the output of running iptables -L : http://pastie.org/4892233 Here is the output when running ps -FC vsftpd : root 14343 1 0 1168 984 3 16:55 ? 00:00:00 /usr/sbin/vsftpd Here is the output of running netstat -tlpn | grep vsftpd : tcp6 0 0 :::21 :::* LISTEN 14343/vsftpd I have uninstalled and reinstalled many times and tried several different configurations and am at a complete loss as to why this may not be working. We very often use the same configuration on the same type of servers with no issues. Thank you in advance for your help.
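
    A banner that appears with no login prompt from outside, plus clients that hang, is often a passive-mode problem rather than vsftpd itself. A hedged sketch of pinning the passive data ports in vsftpd.conf and opening them in iptables - the port range here is arbitrary:

      # /etc/vsftpd.conf -- pin passive data connections to a known range
      pasv_enable=YES
      pasv_min_port=40000
      pasv_max_port=40100
      # if the server sits behind NAT, also advertise the public address:
      # pasv_address=x.x.x.x

      # allow the control port and the passive range through the firewall
      iptables -A INPUT -p tcp --dport 21 -j ACCEPT
      iptables -A INPUT -p tcp --dport 40000:40100 -j ACCEPT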

    Read the article

  • Server 2003 and XP client: why are HTTP connections being silently dropped?

    - by Asa Yeamans
    On my network, my edge router - a Windows 2003 R2 server router with all the latest updates - will drop packets, but only under specific circumstances. I have troubleshot and isolated it down to the simplest configuration I can. There is NO NAT involved, only fully public IP addresses. No firewalls are running either; all have been disabled. There are no packet filters on any interfaces anywhere either. I have a single Windows XP virtual machine and my edge router (the Windows 2003 R2 server, also a virtual machine) running on a Windows 2008 R2 x64 system (running Virtual Server 2005, as I don't have an Intel VT-compatible chip yet). The edge router can access any external HTTP site just fine, no issues. However, the Windows XP machine is only able to access certain sites. These work: www.google.com www.txstate.edu www.workintexas.com www.thedailywtf.com. These don't: www.yahoo.com www.utexas.edu en.wikipedia.org slashdot.org www.bing.com. I have removed all possibility of DNS issues by connecting with netcat from the XP box and sending GET /\r\nHost: \r\n\r\n and that connection replicates the issue as well. The network setup: my statically assigned IP block is x.x.x.168/29. DSL modem -----PPPoE connection---- x.x.x.169 [EdgeRouter]; [EdgeRouter] x.x.x.170 -----virtual Ethernet----- x.x.x.174 [Test2]. Test2's default gateway is x.x.x.170, and Test2 can ping any and every valid, accessible, public IP address with no packet loss whatsoever. If I connect directly over PPPoE from Test2 (the XP box), everything works just fine... I'm at my wit's end; I have NO IDEA what's causing this.
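
    Given the PPPoE hop and the fact that the TCP handshake completes but some sites never answer, one cheap thing to rule out from the XP box is a path-MTU black hole: probe for the largest packet that reaches a failing site unfragmented, and if it is well below 1472 bytes, clamping the MTU/MSS on the 2003 router's PPPoE interface is the usual fix. A sketch:

      :: 1472 bytes of payload + 28 bytes of headers = 1500; step down until the ping succeeds
      ping www.yahoo.com -f -l 1472
      ping www.yahoo.com -f -l 1400
      ping www.yahoo.com -f -l 1360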

    Read the article

  • How can I store logs and meet compliance requirements for free?

    - by Martin
    I am trying to keep long-term logs of an app in such a way that it could plausibly be demonstrated to third parties/court that the application has processed certain data at a given time. The data can be represented in XML or text format. A simple gzipped log is not plausible, as I may have added or modified data afterwards, whereas an external logging service would be overkill. Cost is an issue; we are not dealing with financial data or the like, but rather some simple user-generated content, where some malicious users tried to blame the operator in the past when things escalated and went to court. My question: is there some kind of signing software for Linux that signs each element of a log in such a way that it can be easily shown that no element can be added or modified afterwards? Plug-ins for some free Splunk alternatives would be fine too. Ideally the software I am looking for should be under a GPL or similar license. I could probably achieve something like this by using PGP/GPG signing functions and including the previous element's signature within the following element, but I would prefer to use some program where you do not have to argue about the validity of your own code. Note to mods: I am not asking this question on Stack Overflow, because I am not looking to write my own code, for the reasons described above. I think this question fits Server Fault rather than Super User, as server-side logging software is discussed here rather than on Super User.
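
    For what it's worth, the hash-chain idea described above can be sketched in a few lines of shell on top of gpg: each entry is signed together with the digest of the previous signed entry, so a later modification or insertion breaks the chain. A minimal sketch, not hardened code - the file layout and key handling are made up:

      #!/bin/bash
      # append_log.sh <logdir> <message> -- write entry N as a clearsigned file chained to entry N-1
      LOGDIR=$1; MSG=$2
      LAST=$(ls "$LOGDIR" 2>/dev/null | sort -n | tail -n 1)             # previous entry file, if any
      PREV=$([ -n "$LAST" ] && sha256sum "$LOGDIR/$LAST" | cut -d' ' -f1)
      LASTNUM=${LAST%.asc}
      N=$(( ${LASTNUM:-0} + 1 ))
      # the signature covers the message and the digest of the previous signed entry
      printf '%s\nprev=%s\nmsg=%s\n' "$(date -u +%FT%TZ)" "$PREV" "$MSG" \
        | gpg --clearsign > "$LOGDIR/$N.asc"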

    Read the article

  • Configure Apache + Passenger to serve static files from different directory

    - by Rory Fitzpatrick
    I'm trying to set up Apache and Passenger to serve a Rails app. However, I also need it to serve static files from a directory other than /public and give precedence to these static files over anything in the Rails app. The Rails app is in /home/user/apps/testapp and the static files are in /home/user/public_html. For various reasons the static files cannot simply be moved to the Rails public folder. Also note that the root http://domain.com/ should be served by the index.html file in the public_html folder. Here is the config I'm using: <VirtualHost *:80> ServerName domain.com DocumentRoot /home/user/apps/testapp/public RewriteEngine On RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -f RewriteCond /home/user/public_html/%{REQUEST_FILENAME} -d RewriteRule ^/(.*)$ /home/user/public_html/$1 [L] </VirtualHost> This serves the Rails application fine but gives 404 for any static content from public_html. I have also tried a configuration that uses DocumentRoot /home/user/public_html, but this doesn't serve the Rails app at all, presumably because Passenger doesn't know to process the request. Interestingly, if I change the conditions to !-f and !-d and the rewrite rule to redirect to another domain, it works as expected (e.g. http://domain.com/doesnt_exist gets redirected to http://otherdomain.com/doesnt_exist). How can I configure Apache to serve static files like this, but allow all other requests to continue to Passenger?
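
    One likely culprit in the config above: the two RewriteConds are ANDed by default, and a path can never be both a regular file (-f) and a directory (-d), so the rule never fires. A sketch of one way to express the intent ("serve it from public_html if it exists there, otherwise fall through to Passenger") using [OR] and REQUEST_URI - Apache also needs read access to public_html granted via a <Directory> block:

      <VirtualHost *:80>
          ServerName domain.com
          DocumentRoot /home/user/apps/testapp/public

          RewriteEngine On
          # file OR directory exists under public_html -> serve it from there, else let Passenger handle it
          RewriteCond /home/user/public_html%{REQUEST_URI} -f [OR]
          RewriteCond /home/user/public_html%{REQUEST_URI} -d
          RewriteRule ^/(.*)$ /home/user/public_html/$1 [L]
      </VirtualHost>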

    Read the article

  • Linux Startup Script after Gnome Login

    - by Eric
    I have a Fedora server on which I want to spawn an interactive Python script after the user logs in. This script will ask the user for various types of information for configuring the system, or it will search for the previous config file and show them the predefined information. Originally I was going to put this in rc.local or make it run with init.d, but that messed up the boot due to how the script is spawned. So I would like this script to run as soon as the user logs in to GNOME. I've searched around quite a bit and found this answer, which appears to be exactly what I want, but it isn't working the way I want it to. Below is my entry. [Desktop Entry] Name=MyScript GenericName=Script for initial configuration Comment=I really want this to work Exec=/usr/local/bin/myscript.sh Terminal=true Type=Application X-GNOME-Autostart-enabled=true Whenever I log in, nothing happens. So I then did a test, modifying "myscript.sh" to just echo some text to a file, and it worked fine. So it appears the part that isn't working is the script popping open a terminal and waiting for the user's input. Are there any additional options I need to add to make this work? I can confirm that when I run /usr/local/bin/myscript.sh from the CLI it works fine. I have also tried adding "StartupNotify=true" and still no luck. Edit @John - I tried changing my Exec= to /usr/local/bin/myscript-test, and this is what myscript-test contains: #!/bin/bash xterm -e /usr/local/bin/myscript.sh Yet again, when I just run myscript-test it works fine. However, when I put that in my autostart, nothing happens. Edit 2 - I did a few more tests and it did start working, but I had to remove Terminal=true before the xterm would pop up. Thanks for your help.
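
    For anyone hitting the same thing, the combination that ended up working here can be collapsed into a single autostart entry that launches the terminal itself, with Terminal left off - a sketch using the asker's paths:

      # ~/.config/autostart/myscript.desktop
      [Desktop Entry]
      Type=Application
      Name=MyScript
      Comment=Run the interactive configuration script after login
      Exec=xterm -e /usr/local/bin/myscript.sh
      Terminal=false
      X-GNOME-Autostart-enabled=true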

    Read the article

  • XP/Intel wireless only showing 'hpsetup' ad-hoc network that isn't there

    - by ewall
    Trying to help my friend with her work XP laptop, which recently stopped seeing any wireless SSIDs except the SSID 'hpsetup' (presumably from a wireless-enabled HP printer). Relevant information: The laptop is a Lenovo T500 (Centrino 2 chipset) with XP SP3. The network adapter is an Intel WiFi Link 5300 AGN (built-in). Only the latest Intel drivers (version 13.5) are installed, not the Intel config software, so XP is using the Wireless Zero Configuration manager. The wireless router is a NetGear WGR614 v7 with 802.11b/g. The SSID is broadcasting, and all the other laptops in the house can see and connect to it. On the laptop, I have tried repairing the network connection, disabling power management, turning off the 802.11a & n radios, and more... but it didn't help. Some of the wireless settings are managed by Group Policy from her office (I get the "At least one of your changes was not applied successfully to your wireless configuration" message). It is enforced to connect to "Access point (infrastructure) networks only". The real kicker is that my laptop does not see an SSID named 'hpsetup' here, but it can see several broadcast SSIDs including the one we want, while my friend's laptop doesn't see any SSID except 'hpsetup'. Any suggestions?

    Read the article

  • Would like to change audio codec, but keep video settings with ffmpeg

    - by Craig Tataryn
    I have a video for which I'd like to convert the audio codec to AAC at 320 kbps / 44.100 kHz. What ffmpeg switches would I use such that all the video settings and the video codec remain the same, but only the audio codec and settings change? Here's my video: $ ffmpeg -i Winnipeg.rb\ Scala-Talk.mov FFmpeg version SVN-r25375, Copyright (c) 2000-2010 the FFmpeg developers built on Oct 6 2010 13:02:41 with gcc 4.2.1 (Apple Inc. build 5664) configuration: --enable-libmp3lame --enable-shared --disable-mmx --arch=x86_64 libavutil 50.32. 2 / 50.32. 2 libavcore 0. 9. 1 / 0. 9. 1 libavcodec 52.92. 0 / 52.92. 0 libavformat 52.80. 0 / 52.80. 0 libavdevice 52. 2. 2 / 52. 2. 2 libavfilter 1.48. 0 / 1.48. 0 libswscale 0.12. 0 / 0.12. 0 Seems stream 0 codec frame rate differs from container frame rate: 2000.00 (2000/1) -> 10.00 (10/1) Input #0, mov,mp4,m4a,3gp,3g2,mj2, from 'Winnipeg.rb Scala-Talk.mov': Metadata: major_brand : qt minor_version : 537199360 compatible_brands: qt Duration: 01:10:53.00, start: 0.000000, bitrate: 283 kb/s Stream #0.0(eng): Video: h264, yuv420p, 800x598, 94 kb/s, 10 fps, 10 tbr, 1k tbn, 2k tbc Stream #0.1(eng): Audio: adpcm_ima_qt, 22050 Hz, 1 channels, s16 Stream #0.2(eng): Audio: adpcm_ima_qt, 22050 Hz, 1 channels, s16 At least one output file must be specified Many thanks in advance! One thing with ffmpeg I've never been able to grok is how to just "tweak" files without having to regurgitate every little setting for things you don't want changed.
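
    A hedged sketch of the kind of invocation being asked for: copy the video stream untouched and re-encode only the audio. This assumes the build has an AAC encoder available - the configuration line above only lists libmp3lame, so it may need to be rebuilt with --enable-libfaac or another AAC encoder substituted:

      # keep the video as-is, re-encode the audio to AAC at 320 kbps / 44.1 kHz
      ffmpeg -i "Winnipeg.rb Scala-Talk.mov" \
             -vcodec copy \
             -acodec libfaac -ab 320k -ar 44100 \
             output.mov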

    Read the article

  • How should a small company administer their web server?

    - by John Isaacks
    We currently have our website hosted by a small company that is actually a reseller for Rackspace. They act as our server administrators: they configured the servers, they handle the backups, and if there is a problem, we call them and they fix it. We are growing and want to move away from our shared server to either a cloud or a dedicated server. I am leaning towards cloud myself, but I am open to either. The current company doesn't seem to want to offer us anything more than a shared hosting plan. I looked into cloud solutions at vps.net; with them I would have to be the server administrator myself. I am the website programmer, but administering the server is outside my comfort zone. vps.net does have a $99/month plan for Pro-Active Managed Support, but I am not sure if this is the equivalent of a server admin who is there when you need them. We could hire someone in-house, but I think that would be overkill for our needs. I am not exactly sure what we need; I do know we need as close to 100% uptime as we possibly can get, and we need the ability to add/remove/change the server configuration/software/etc. when needed (though changes shouldn't be very frequent once everything is set up right). Can someone point me in the right direction? What do other companies do?

    Read the article

  • Slowdown upon router/modem setup change

    - by Ollie Saunders
    I've been using a Belkin FSD7632-4 modem router to connect to my TalkTalk-provided ADSL internet connection for some time and have been pretty happy with it. Recently, however, the connection has been failing, and I decided to get an ASUS RT-N16 instead, which is also a much more capable router generally. The ASUS RT-N16 doesn't come with a built-in modem, so I purchased a Zoom modem as well. I've set them both up and am using them to post this message. But I'm a bit miffed to find that I get a significantly and consistently slower downstream rate from the new configuration than with the old Belkin. Belkin modem router: downstream 3.45 mbps, upstream 0.73 mbps. ASUS router + Zoom modem: downstream 2.71 mbps, upstream 0.66 mbps. Any ideas why this is? The really weird thing about this is that the Zoom supports ADSL2 and ADSL2+ but I don't think the old Belkin does. At first I thought it might be due to the Zoom modem being limited to PPPoE instead of PPPoA, which my ISP supports, but then I tried using PPPoE with the Belkin and that still gave a high speed. I'm using VC-Mux encapsulation with both, a VPI of 0 and a VCI of 38. I pulled this data off the Zoom: Mode: ADSL2 Line Coding: Trellis On Status: No Defect Link Power State: L0 Downstream Upstream SNR Margin (dB): 12.3 11.8 Attenuation (dB): 43.0 24.9 Output Power (dBm): 12.9 0.0 Attainable Rate (Kbps): 3936 844 Rate (Kbps): 3194 840 MSGc (number of bytes in overhead channel message): 59 10 B (number of bytes in Mux Data Frame): 99 14 M (number of Mux Data Frames in FEC Data Frame): 2 16 T (Mux Data Frames over sync bytes): 1 8 R (number of check bytes in FEC Data Frame): 8 8 S (ratio of FEC over PMD Data Frame length): 1.9833 9.0594 L (number of bits in PMD Data Frame): 839 219 D (interleaver depth): 32 2 Delay (msec): 15 4 Super Frames: 15808 14078 Super Frame Errors: 0 4294967232 RS Words: 513778 111753 RS Correctable Errors: 126 4294967238 RS Uncorrectable Errors: 0 N/A HEC Errors: 0 4294967279 OCD Errors: 0 0 LCD Errors: 0 0 Total Cells: 1920175 237597 Data Cells: 205993 392 Bit Errors: 0 0 Total ES: 0 0 Total SES: 0 0 Total UAS: 34 0

    Read the article

  • Creating multiple SFTP users for one account

    - by Tom Marthenal
    I'm in the process of migrating an aging shared-hosting system to more modern technologies. Right now, plain old insecure FTP is the only way for customers to access their files. I plan on replacing this with SFTP, but I need a way to create multiple SFTP users that correspond to one UNIX account. A customer has one account on the machine (e.g. customer) with a home directory like /home/customer/. Our clients are used to being able to create an arbitrary number of FTP accounts for their domains (to give out to different people). We need the same capability with SFTP. My first thought is to use SSH keys and just add each new "user" to authorized_keys, but this is confusing for our customers, many of whom are not technically-inclined and would prefer to stick with passwords. SSH is not an issue, only SFTP is available. How can we create multiple SFTP accounts (customer, customer_developer1, customer_developer2, etc.) that all function as equivalents and don't interfere with file permissions (ideally, all files should retain customer as their owner)? My initial thought was some kind of PAM module, but I don't have a clear idea of how to accomplish this within our constraints. We are open to using an alternative SSH daemon if OpenSSH isn't suitable for our situation; again, it needs to support only SFTP and not SSH. Currently our SSH configuration has this appended to it in order to jail the users in their own directories: # all customers have group 'customer' Match group customer ChrootDirectory /home/%u # jail in home directories AllowTcpForwarding no X11Forwarding no ForceCommand internal-sftp # force SFTP PasswordAuthentication yes # for non-customer accounts we use keys instead Our servers are running Ubuntu 12.04 LTS.
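
    One way to sketch the "several logins, one set of files" requirement without a PAM module: give every sub-account the customer's group as its primary group, point its home at the customer's tree, and make that tree setgid so new files stay group-accessible (a strict "owner stays customer" rule would have to be relaxed to group ownership). The commands below are illustrative, not a tested recipe:

      # extra SFTP-only logins that share the customer's group and directory tree
      useradd -g customer -d /home/customer -M -s /usr/sbin/nologin customer_developer1
      passwd customer_developer1

      # group-own the shared tree and set the setgid bit so new files inherit the group
      chgrp -R customer /home/customer
      find /home/customer -type d -exec chmod g+rwxs {} \;

    Since the sub-accounts are in group customer, the existing Match block already applies to them; the ChrootDirectory /home/%u line would need to point at the shared path instead (e.g. ChrootDirectory %h), and the chroot target itself must remain root-owned and not group-writable, which is the usual sshd constraint.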

    Read the article

  • Running Flash on a headless Solaris box

    - by Marty Pitt
    Our build server is a Solaris box, and I'm trying to run a suite of FlexUnit tests as part of the automated build process. This works by compiling a SWF movie with a suite of automated unit tests. The build script launches this movie, which automatically begins running the tests. Results of each test are sent back to the launching script across a port and written out to a local XML file. Once the tests are completed, the movie closes down, and the build script interrogates the results to see if all the tests passed. The FlexUnit wiki provides information about how to achieve this on a Unix server by using Xvnc to provide a virtual space for the Flash movie to run its tests in. I've provided this information to our sysadmin team (along with the link to the article), and I've been told that because this is a Solaris box, we can't use that approach - Xvnc isn't supported on Solaris. Unfortunately, I know very little about servers, *nix vs Solaris, or Xvnc. Can someone please provide some advice about how we can achieve the same outcome on a Solaris box?
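
    If Xvnc really is off the table, the same trick should work with any headless X server that is available for Solaris - Xvfb, for example, ships with the X11 packages on many Solaris installs - since the FlexUnit runner only needs a DISPLAY to point at. A rough sketch, with the display number, binary path and player name as assumptions:

      # start a virtual framebuffer and point the test run at it
      /usr/X11/bin/Xvfb :1 -screen 0 1024x768x24 &
      DISPLAY=:1
      export DISPLAY

      # launch the standalone Flash Player with the test movie; results come back over the usual port
      flashplayer TestRunner.swf &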

    Read the article

  • DD-WRT Access Point as a Router

    - by Dzh
    Following the suggestion on this question asked on Network Engineering, I am asking the question here. This is an extension of my previous question (I think it was deleted), where I was claiming that DD-WRT was disabling its DHCP server once connected to the network. I was wrong; it now seems that it is bridging itself with another wireless router connected in parallel. I have two Draytek 2820s and one Netgear WG602v3 with the latest DD-WRT. Let's call one wired-Draytek; it has wireless disabled. The other one, let's call it wireless-Draytek, is connected to wired-Draytek and has wireless with MAC filtering enabled. Once I connect the Netgear to the wired-Draytek, a client that connects to the Netgear will be assigned an IP address from the wireless-Draytek. If the MAC address is not on the wireless-Draytek, the client is unable to obtain an IP address and has no connectivity at all, even with a manually assigned static IP configuration. To illustrate further, this is how the network is set up: wired-Draytek ---------- wireless-Draytek \_________ Netgear What I wish to have is that the Netgear issues IP addresses from its own IP pool and ignores the MAC filtering rules from the wireless-Draytek. It is kind of puzzling how they are bridging themselves (if they are) automatically. Thanks. UPDATE: It's not a home network; I gave you a somewhat simplified set-up. If there is a better site on Stack Exchange to ask this, please let me know. The Drayteks are running stock firmware; it's only the Netgear that I've flashed to get more stability. In addition to these routers, I also have three 3Com Baseline Switch 2824s, and another Draytek router with a ProSafe FS752TP PoE switch dedicated to VoIP phones. Wired-Draytek has IP 10.0.0.1, DHCP disabled, as there is an AD DC which is issuing IP addresses. Wireless-Draytek has IP 1.1.1.1 and DHCP enabled. The Netgear has the default, 192.168.1.1. As per the suggestion, the specific question is: how do I isolate these two wireless routers?

    Read the article

  • Copy all installed programs & files in a hard disk (which has 32 bit Windows 7) and clone/transfer it to another computer which has 64 bit Windows 7

    - by galacticninja
    I recently got a new PC which has 64-bit Windows 7 installed. The current PC that I am using has 32-bit Windows 7 installed. I would like to know if there is software that can copy all my installed programs and files on the hard disk of the 32-bit Windows 7 PC and transfer them to the newer PC's hard disk, which has a 64-bit version of Windows 7. This is essentially like "cloning" a hard disk, but I would like to use a 64-bit OS on the target drive instead of also using the 32-bit OS of the source drive. I would like to do this so I can avoid reinstalling and reconfiguring my installed programs and files again on the new PC. If possible, I would like the new PC to work as things were on my previous PC, with the installed programs, configuration and files intact, except that the OS is now 64-bit and the hard disk has a larger capacity. I have heard of programs that can clone a hard disk, but my concern is that the 32-bit Windows 7 OS would also be cloned to the new 64-bit PC. If it is not possible to transfer my installed programs and settings the way I described, is there software that can make it easier to migrate my installed programs, their configurations and my files from a 32-bit Windows 7 PC to a 64-bit Windows 7 PC? Details: I have a SATA-to-USB connector/adapter to copy files from the current hard disk to the newer one. The two PCs are connected through a LAN, so I can also transfer files over the LAN. Both PCs have only one hard disk.

    Read the article

  • How to reject messages to unknown user in sendmail cooperating with MS-Exchange?

    - by user71061
    Hi! I have an MS Exchange 2003 server configured as the mail server for an organization. As this server is located on the organization's internal network and I don't want to expose it directly to the internet, I have a second server - a Linux box with sendmail - configured as an intelligent relay (it accepts all messages from the internet addressed to @my_domain and forwards them to the internal Exchange server, and it accepts all messages from this internal Exchange server and forwards them over the internet). This configuration works fine, but I want to eliminate messages addressed to non-existing users as early as possible. A good solution could be enabling the Exchange server's recipient filtering function together with "tar pitting", but in my case this doesn't solve the problem, because before any message reaches my Exchange server (which could eventually reject it), it has already been accepted by the sendmail server sitting in front of the Exchange server. So, I want to configure my sendmail server in such a way that during the initial SMTP conversation it can somehow query my Exchange server to check whether the recipient address is valid and, based on the result of this query, accept or reject (possibly with some delay) the incoming message at a very early phase. In fact, I have already solved this issue by writing my own simple sendmail milter program which checks the recipient address against a text file with a list of valid addresses. But this solution no longer satisfies me, because it requires frequent updates of this file, and due to a lack of time/motivation/programming skills I don't want to keep working on my source code to add the functionality of querying my Exchange server. Maybe I can achieve the desired effect by configuring some component of already available Linux software. Any ideas?
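
    One configuration-only route worth evaluating, instead of the hand-maintained text file, is sendmail's ldap_routing feature pointed at Active Directory, so that recipients with no mailbox are rejected during the SMTP dialogue. The sketch below is heavily hedged: the server, base DN and domain are placeholders, and AD normally requires an authenticated bind (extra -d/-M/-P options in the LDAP map spec):

      dnl --- sendmail.mc: look up recipients of my_domain in AD, bounce unknown ones at RCPT time ---
      define(`confLDAP_DEFAULT_SPEC', `-h dc1.example.local -b "DC=example,DC=local"')dnl
      LDAPROUTE_DOMAIN(`my_domain.com')dnl
      FEATURE(`ldap_routing', , , `bounce')dnl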

    Read the article

  • How to rename network printer on Windows 7?

    - by Adrian McCarthy
    This question is similar to How do you rename a printer device in Windows 7 64 bit, except the answers there do not work, and I'll provide more information. This is a home network, not a domain. I have set up a Brother HL-5170DN. It is a network printer connected directly to an Ethernet hub. I can connect to it with Windows 7, but on Windows 7 it defaults to the name "binary_p1 on Brn37415f", which isn't very useful. And I cannot seem to change the name. I have it working with several Windows XP and Vista machines, and I can change the name on those machines. On Windows 7 Printer properties: I can see the "binary_p1" name on the General tab. I can select the text, but I cannot change it. The field is not grayed out, but I cannot type anything into it. On the Ports tab, all of the controls are grayed out (disabled). The selected Port is called "\\Brn_37415f\binary_p1", and it's described as "Client Side Rendering Provider" and the printer field says "binary_p1". On the Security tab, I can see that my account has "Manage this printer" permissions. If I choose Printer Server Properties, I can select the port and click Configure Port, but I get a dialog that says, "An error occurred during port configuration. This option is not supported." I have found many forums with people asking the same question without getting an answer. Update: No more bounties to offer, but I'm still looking for a solution to this problem.
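
    A possible workaround (untested here): stop using the share announced by the printer's embedded print server and instead add the printer locally over a standard TCP/IP port, which leaves the printer name editable. A sketch using the port script that ships with Windows 7 - the IP address is a placeholder for whatever the Brother reports:

      :: create a raw TCP/IP port pointing straight at the printer
      cscript C:\Windows\System32\Printing_Admin_Scripts\en-US\prnport.vbs -a -r IP_192.168.1.25 -h 192.168.1.25 -o raw -n 9100

    The printer can then be added through "Add a local printer" using that port, at which point the display name is free text.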

    Read the article

  • What is the best cloud technology to use for MongoDB/GridFS database servers

    - by Nerian
    We are going to launch a service that will require between 1 and 2 GB of file storage per paid user. I am going to use GridFS for storing files. GridFS is a module for MongoDB that allows storing large files in the database. I am pondering the different options for hosting the database, but since I am inexperienced at deployment and this is my first time with MongoDB, I need your experience. Criteria: I want to spend my time developing my core business, that is, my own application. I am a Ruby on Rails developer. I do not like to mess with server configuration; hence, I would like a fully managed hosting solution, but I would like to know about any other option if you think it is worth it. It should be able to scale, cloud style, pay as you go. The lower the price, the better. So far I know of these services: https://mongohq.com/pricing https://mongomachine.com/pricing https://mongolab.com/about/pricing/ http://cloudcontrol.com/add-ons/mongodb/ And they seem to be OK for common needs, that is, no file storage. But I am going to use GridFS, so the size matters. These services seem to scale, in price, quite poorly. MongoHQ: the largest plan's max storage is 20 GB - seems like very little storage for GridFS. MongoMachine: flat price, $2.50 per GB; I didn't find the limit - seems like a good price compared with the others. MongoLab: 3.984 GB max, which I don't think I will hit, so perfect; $8 per GB, quite costly. CloudControl: the largest plan is 20 GB; the custom service starts at 250€ plus some unspecified charge per GB. What is your experience with these services? Any downtimes? Other possibilities? Edit: added the meaning of GridFS.

    Read the article

  • Checkpoint VPN-1 R60 and Windows 7 64 Bit Client

    - by Mohit
    Hi all, to the best of my knowledge my company is using a Checkpoint VPN-1 R60 firewall (VPN server) - I am guessing at the version, as I don't know how to check the server version. Now the problem is that I installed Windows 7 64-bit, but after my research I found that there is not a single client (SecuRemote/SecureClient) for Win7 64-bit when the firewall or server is R60. I thought of some open-source solutions. Can you please suggest some, along with the configuration required? As of now, I know the IP of the server, and I know the username and password I use to connect - I can confirm that it is not my domain password. I am not a network guy; I am more of a developer. But I need some help with this, so let me know if I can provide more details. I really need urgent help on this.

    Read the article

  • nginx giving 404 when accessing php from alias directory

    - by code90
    I am trying to migrate from Apache to nginx. The PHP sites that I am hosting need to access a shared library, which turns out to be an aliased directory. Below is the configuration I came up with. HTML files work fine, but PHP files give 404. I have read through and tried most (if not all) of the answers to similar questions without any success. Any hint on what might be causing the issue in my case? location /wtlib/ { alias /var/www/shared/wtlib_4/; index index.php; } location ~ /wtlib/.*\.php$ { alias /var/www/shared/wtlib_4/; try_files $uri =404; if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) { set $valid_fastcgi_script_name $1; } fastcgi_pass 127.0.0.1:9013; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/shared/wtlib_4$valid_fastcgi_script_name; fastcgi_param REDIRECT_STATUS 200; include /etc/nginx/fastcgi_params; } Thanks all! Update: the following seems to be working fine: location /wtlib/ { alias /usr/share/php/wtlib_4/; location ~* .*\.php$ { try_files $uri @php_wtlib; } location ~* \.(html|htm|js|css|png|jpg|jpeg|gif|ico|pdf|zip|rar|air)$ { expires 7d; access_log off; } } location @php_wtlib { if ($fastcgi_script_name ~ /wtlib(/.*\.php)$) { set $valid_fastcgi_script_name $1; } fastcgi_pass $byr_pass; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/php/wtlib_4$valid_fastcgi_script_name; fastcgi_param REDIRECT_STATUS 200; include /etc/nginx/fastcgi_params; }

    Read the article

  • Move OS from RAID5 array to RAID 1 arrays

    - by Antoine
    I want to give a last boost to my old ProLiant ML350 G5 server, which just needs to stay reliable for a few more years! With a defined budget of about $1500 (I do not have more), I plan to replace the CPU (and add a second one), replace the battery-backed cache of my RAID controller (E200i), double the RAM, and change all hard drives. I have 7 HDDs (SAS 10k rpm, 72 GB) plus 1 spare in RAID 5, and my system is completely full (no empty tray, full disks). In my current RAID 5 array I have 2 partitions: one 20 GB OS partition and one 350 GB data partition. I plan to replace these 8 disks with 2 x 300 GB SAS 15k rpm in RAID 1 (one partition for the OS) and 2 x 2 TB SATA 7.2k rpm in RAID 1 (one partition for data). My biggest constraint is that I have only one day to upgrade the server. Therefore, I'm looking to clone all my files (OS + data partition) to the new arrays, i.e. the OS partition shall be cloned to the RAID 1 "2x300GB" array and the data partition shall be cloned to the RAID 1 "2x2TB" array. My second problem is that I need to physically remove all the old hard drives before inserting the new ones. I'm running Windows Server 2003 R2, and even though MS support will expire soon, I cannot buy a new licence and spend time on configuration. Obviously, with $1500, I also cannot buy a new server that I could start configuring now! I thought about ASR (NTBackup), but I have no floppy drive (and do not really want to invest in one!). I thought about a Clonezilla clone, and read this interesting link: Windows Server 2003 - move C: partition to a new SAS disk, but I'm not so confident about using Clonezilla with RAID 5. What would be the best option to quickly and easily (if possible!) "copy/paste" my OS (so there is no need to reinstall and reconfigure everything) and my data, programs, services, etc.? Thanks for your comments

    Read the article

  • git : The remote end hung up unexpectedly - too many simultaneous users?

    - by Pritam Barhate
    I asked this first on Stack Overflow and it was suggested that I ask it here: We have a self-hosted git server (Gitolite) on a VPS account (CPU: 2.68 GHz, RAM: 1824 MB). This same VPS is also used to publish our under-development web apps for client demos (very little traffic), so the main use of the server is as a git server only. This git server is accessed by a team of 30-40 people for various projects. Our problem is that during the day, when 6-7 people are trying to access the server (sometimes the same repo), we frequently get the error message: ssh: connect to host xxx.xxx.xx.xx port 22: Bad file number fatal: The remote end hung up unexpectedly After trying for 10-15 minutes it generally succeeds. During early mornings and late nights, when there are only 1-2 people, git commands work with a 100% success rate. I would also like to note that if I access other files hosted on the server through HTTP, it works fine. I found a couple of questions on Stack Overflow and on other sites regarding this, but most people point towards SSH key setup or conflicts between msysgit and Cygwin SSH. However, I don't think this is the problem in our case, as we see this behavior on Windows machines (using msysgit only) as well as Macs. Also, if it were an SSH configuration issue, it shouldn't work at all; but in our case it works after 10-15 minutes. I think in our case it might be too many simultaneous connections to the same server (or the same repo) or something like that. Does there exist a setting or a conf file that needs to be modified to solve this problem? Please help me solve this problem or point me in the right direction. Thanks in advance. Pritam.
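
    Since the failures only show up when several people hit the box at once, one sshd knob worth checking is MaxStartups, which caps concurrent not-yet-authenticated SSH connections (the old default of 10 is easy to exceed when a whole team fetches at the same time). A hedged sketch for /etc/ssh/sshd_config, with values picked arbitrarily:

      # raise the cap on concurrent unauthenticated connections:
      # start randomly dropping new connections at 30, refuse everything at 100
      MaxStartups 30:30:100

      # then reload sshd, e.g. "service ssh reload" on Debian/Ubuntu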

    Read the article

  • Iptables rules, forward between two interfaces

    - by Marco
    I am having some difficulties configuring my Ubuntu server firewall... My situation is this: eth0 - internet, eth1 - lan1, eth2 - lan2. I want clients from lan1 to be unable to communicate with clients from lan2, except for some specific services. E.g. I want clients in lan1 to be able to SSH into a client in lan2, but only that; any other communication is forbidden. So I added these rules to iptables: #Block all traffic between lan, but permit traffic to internet iptables -I FORWARD -i eth1 -o ! eth0 -j DROP iptables -I FORWARD -i eth2 -o ! eth0 -j DROP # Accept ssh traffic from lan1 to client 192.168.20.2 in lan2 iptables -A FORWARD -i eth1 -o eth2 -p tcp --dport 22 -d 192.168.20.2 -j ACCEPT This didn't work. Doing iptables -L FORWARD -v I see: Chain FORWARD (policy DROP 0 packets, 0 bytes) pkts bytes target prot opt in out source destination 33 144 DROP all -- eth1 !eth0 anywhere anywhere 0 0 DROP all -- eth2 !eth0 anywhere anywhere 23630 20M ACCEPT all -- any any anywhere anywhere state RELATED,ESTABLISHED 0 0 ACCEPT all -- eth1 any anywhere anywhere 175 9957 ACCEPT all -- eth1 any anywhere anywhere 107 6420 ACCEPT all -- eth2 any anywhere anywhere 0 0 ACCEPT all -- pptp+ any anywhere anywhere 0 0 ACCEPT all -- tun+ any anywhere anywhere 0 0 ACCEPT tcp -- eth1 eth2 anywhere server2.lan tcp dpt:ssh All packets are dropped, and the packet count for the last rule is 0... How do I have to modify my configuration? Thank you. Regards, Marco
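
    The counters above give it away: the two DROPs were inserted at the top of the chain with -I, while the SSH ACCEPT was appended near the bottom with -A, so it is never reached (0 packets). A sketch of the same policy with the exception placed before the blanket drops - note also that current iptables writes the inverted interface as "! -o eth0":

      # allow the one permitted service first ...
      iptables -A FORWARD -i eth1 -o eth2 -p tcp --dport 22 -d 192.168.20.2 -j ACCEPT
      # ... let reply traffic back through ...
      iptables -A FORWARD -m state --state RELATED,ESTABLISHED -j ACCEPT
      # ... then block everything else between the two LANs, leaving internet-bound traffic alone
      iptables -A FORWARD -i eth1 -o eth2 -j DROP
      iptables -A FORWARD -i eth2 -o eth1 -j DROP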

    Read the article
