Search Results

Search found 25811 results on 1033 pages for 'visual studio 2008'.


  • Can I use iptables on my Varnish server to forward HTTPS traffic to a specific server?

    - by Dylan Beattie
    We use Varnish as our front-end web cache and load balancer, so we have a Linux server in our development environment running Varnish with some basic caching and load-balancing rules across a pair of Windows 2008 IIS web servers. We have a wildcard DNS rule that points *.development at this Varnish box, so we can browse http://www.mysite.com.development, http://www.othersite.com.development, etc. The problem is that since Varnish can't handle HTTPS traffic, we can't access https://www.mysite.com.development/

    For dev/testing we don't need any acceleration or load-balancing - all I need is to tell this box to act as a dumb proxy and forward any incoming requests on port 443 to a specific IIS server. I suspect iptables may offer a solution, but it's been a long while since I wrote an iptables rule. Some initial hacking has got me as far as:

        iptables -F
        iptables -A INPUT -p tcp -m tcp --sport 443 -j ACCEPT
        iptables -A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A INPUT -j LOG --log-level 4 --log-prefix 'PreRouting '
        iptables -A OUTPUT -j LOG --log-level 4 --log-prefix 'PostRouting '
        iptables-save > /etc/iptables.rules

    (where 10.0.0.241 is the IIS box hosting the HTTPS website), but this doesn't appear to be working. To clarify - I realize there are security implications to HTTPS proxying/caching; all I'm looking for is completely transparent IP traffic forwarding. I don't need to decrypt, cache or inspect any of the packets; I just want anything on port 443 to flow through the Linux box to the IIS box behind it as though the Linux box wasn't even there. Any help gratefully received... EDIT: Included full iptables config script.
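
    For reference, a minimal sketch of this kind of transparent port forward (backend address as above, everything else assumed): forwarded packets traverse the FORWARD chain rather than INPUT/OUTPUT, and the kernel needs IP forwarding enabled, so a bare-bones rule set generally looks something like this.

        # sketch only - enable routing, then DNAT port 443 to the backend and masquerade the return path
        sysctl -w net.ipv4.ip_forward=1
        iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 10.0.0.241:443
        iptables -t nat -A POSTROUTING -p tcp -d 10.0.0.241 --dport 443 -j MASQUERADE
        iptables -A FORWARD -p tcp -d 10.0.0.241 --dport 443 -j ACCEPT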

    Read the article

  • Connect to Nonencrypted Wireless Network Using Ubuntu Commands

    - by Tim
    I failed to connect to an open (i.e. nonencrypted) wireless network using Ubuntu command-line tools. Here is what I did:

        $ sudo /etc/init.d/NetworkManager stop
         * Stopping network connection manager NetworkManager          [ OK ]
        $ sudo /sbin/ifconfig wlan0 up
        $ sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL"
        $ sudo dhclient wlan0
        There is already a pid file /var/run/dhclient.pid with pid 10812
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/
        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   LPF/wlan0/00:0e:9b:cd:4e:18
        Sending on   Socket/fallback
        DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
        DHCPREQUEST of 192.168.1.67 on wlan0 to 255.255.255.255 port 67
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 7
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 8
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 21
        DHCPDISCOVER on wlan0 to 255.255.255.255 port 67 interval 6
        No DHCPOFFERS received.
        Trying recorded lease 192.168.1.67
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
        Trying recorded lease 192.168.1.45
        PING 192.168.1.1 (192.168.1.1) 56(84) bytes of data.
        --- 192.168.1.1 ping statistics ---
        1 packets transmitted, 0 received, +1 errors, 100% packet loss, time 0ms
        No working leases in persistent database - sleeping.

        $ sudo /sbin/iwconfig wlan0
        wlan0     IEEE 802.11bg  Mode:Managed  Frequency:2.422 GHz  Access Point: Not-Associated
                  Tx-Power=27 dBm
                  Retry min limit:7   RTS thr:off   Fragment thr=2352 B
                  Encryption key:off
                  Power Management:off
                  Link Quality:0  Signal level:0  Noise level:0
                  Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
                  Tx excessive retries:0  Invalid misc:0   Missed beacon:0

    I was wondering what the problem is and how I can do this correctly. Thanks and regards!
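
    A rough sketch of the pre-DHCP checks this kind of manual setup usually includes (same interface and ESSID as above, everything else assumed): the iwconfig output above reports "Access Point: Not-Associated", and dhclient can only succeed once the card has actually associated.

        # sketch only - confirm the network is visible and that association succeeds before asking for a lease
        sudo iwlist wlan0 scan | grep -i essid
        sudo iwconfig wlan0 essid "Cavalier High-Speed 866-4-CAVTEL"
        sudo iwconfig wlan0          # "Access Point:" should show the AP's MAC, not "Not-Associated"
        sudo dhclient wlan0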

    Read the article

  • Hyper-V vss-writer not making current copies

    - by Martinnj
    I'm using DiskShadow to back up live Hyper-V machines on a Windows 2008 server. The backup consists of three scripts: the first creates the shadow copies and exposes them, the second uses robocopy to copy them to a remote location, and the third unexposes the shadow copies again.

    The first script - the one that runs correctly but fails to do what it's supposed to:

        # DiskShadow script file to backup VM from a Hyper-V host
        # First, delete any shadow copies of the drives. System Drives needs to be included.
        Delete Shadows volume C:
        Delete Shadows volume D:
        Delete Shadows volume E:
        # Ensure that shadow copies will persist after DiskShadow has run
        set context persistent
        # make sure the path already exists
        set verbose on
        begin backup
        add volume D: alias VirtualDisk
        add volume C: alias SystemDrive
        # verify the "Microsoft Hyper-V VSS Writer" writer will be included in the snapshot
        # NOTE: The writer GUID is exclusive for this install/machine, must be changed on other machines!
        writer verify {66841cd4-6ded-4f4b-8f17-fd23f8ddc3de}
        create
        end backup
        # Backup is exposed as drive X: make sure your drive letter X is not in use
        Expose %VirtualDisk% X:
        Exit

    The next is just a robocopy and then an unexpose. Now, when I run the above script I get no errors from it, except that the "BITS" writer has been excluded because none of its components are included. That's okay, because I really only need the Hyper-V writer. I also double-checked the GUID for the writer; it's correct.

    During the time when the Hyper-V writer becomes active, two things happen on the guest machines: the Debian/Linux machine goes to a saved state and restores when done, all fine; the Windows guests show "creating VSS snapshot-sets" or something similar. Then X: gets exposed and I can copy the .vhd files over.

    The problem is that, for some reason, the VHD files I copy over seem to be old copies - they are missing files, users and updates that exist on the actual machines. I also tried putting the machines in a saved state manually; it didn't change the outcome. I hope someone here has an idea of how to solve this.
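
    As a hedged aside, the writer state can be inspected from an elevated prompt around a run like this (the script path is assumed); a writer left in a failed state after the first pass is at least easy to spot this way.

        rem sketch only - check the Hyper-V VSS writer before and after running the DiskShadow script
        vssadmin list writers
        diskshadow /s C:\Scripts\backup-vms.dsh
        vssadmin list writers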

    Read the article

  • Splunk is fantastically expensive: What are the alternatives?

    - by samsmith
    This has been discussed, but it has been several months, so it may be time to revisit it: Earlier discussion RE Splunk alternatives

    For the record, Splunk rocks. But the pricing is simply beyond what we can consider (when I spoke with Splunk today, the cost for a system to index 5 GB/day of data is over $30,000). That is more than we spend on SQL Server (by a large multiple), more than we spend on a rack of servers (by a multiple), etc. The Splunk sales team is correct that for $30K we get more value and functionality than if we spent the same building our own system, but it doesn't matter. The Splunk cost is simply too high (by a multiple). Soooooo, we are looking around!

    Is anyone out there building a Splunk-like system? Our basic needs:
    - Able to listen for syslog messages on multiple UDP ports
    - Able to index the incoming data in an async way
    - Some kind of search engine
    - Some kind of UI
    - An API to the search engine (to embed in our console)

    We currently need to index 3-5 GB/day, but need to be able to scale to 10 GB/day or more. We do not need a lot of history (30 days is fine). We use Windows 2008 and 2003 servers. Thanks for your thoughts!

    Read the article

  • Kindly guide me to buy a new laptop [on hold]

    - by Its me 007
    I am from India and I want to buy a new laptop. I have shortlisted a few but am confused about which processor, chipset and graphics card is best suited to my requirements. NOTE: NOT ABLE TO POST THE LINKS, YOU WILL HAVE TO COPY-PASTE THEM. SORRY.

    1) HP Pavilion 15-N004TX - 4th Gen Ci5 4200U / 4GB RAM / 500GB HDD / 1GB Radeon graphics - Rs 39990
    www.homeshop18.com/hp-pavilion-15-n004tx-laptop-4th-gen-intel-core-i5-4200u-4gb-500gb-15-6-linux-silver-black/computers-tablets/laptops/product:30989197/cid:16317/

    2) Lenovo Essential G510 (59-398452) - 4th Gen Ci5 4200M / 4GB / 500GB / Win8 / 2GB ATI Sunpro 8570 graphics - Rs 44969
    www.flipkart.com/lenovo-essential-g510-59-398452-laptop-4th-gen-ci5-4gb-500gb-win8-2gb-graph/p/itmdp26eprwf5k5v?gclid=CMnh99GA2LoCFaRU4godNiUAGQ&semcmpid=sem_7847244212_laptopsnew_goog&tgi=sem%2C1%2CG%2C7847244212%2Cg%2Csearch%2C%2C24387103114%2C1t1%2Cb%2C%2Blenovo+%2Bg510%2F59+%2B398452%2Cc%2C%2C%2C%2C%2C%2C%2C2

    3) HP Pavilion G6-2303TX Laptop (3rd Gen Ci5 3230M / 4GB / 500GB / DOS / 1GB graphics) - Rs 40500
    www.flipkart.com/hp-pavilion-g6-2303tx-laptop-3rd-gen-ci5-4gb-500gb-dos-1gb-graph/p/itmdm6yzh4gr4cxd?pid=COMDM6YHWMGDRDEZ&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81

    4) HP Pavilion 15-E039TX Laptop (3rd Gen Ci5 3230M / 4GB / 1TB / Win8 / 2GB graphics) - Rs 46690
    www.flipkart.com/hp-pavilion-15-e039tx-laptop-3rd-gen-ci5-4gb-1tb-win8-2gb-graph/p/itmdn4d9wykhdcpz?pid=COMDN4CZGFMGJNTN&ref=1d2b85fc-a03d-4c7d-844b-ec9e8dc95a81

    Now I am confused about: Which processor and chipset is best? How much graphics memory is enough? (I am not a gamer.) Is any of these laptops future-proof, i.e. will it support upcoming programming software that demands more processor power and memory?

    The laptop will be mainly used for multiprocessing. It should at least be capable of running the following: Visual Studio 2012 and the upcoming versions for at least 4 years; SQL Server 2008 R2 and above; SharePoint; Blend; Photoshop. Kindly suggest. If anyone knows a good laptop with a good configuration within a 50k budget, kindly suggest it. Thanks in advance.

    Read the article

  • Getting prompted for password accessing page through script even when client and server are in the same domain

    - by Munawar
    I'm trying to pull up an internal webpage in an automated fashion using the methods in 'InternetExplorer.Application' from VBScript, but I'm getting prompted for a password, even though the client and the server are in the same domain. Predictably, when I manually try to access the web page I don't have any problem; only when I try it using cscript.exe or iexplore.exe do I get prompted. I'm trying to automate some of the smoke tests we do after a new build is deployed, but this password prompt is getting in the way.

    The system specs:
    - Client machine: IE 7.0, OS is Windows Server 2003
    - Server machine: Windows Server 2008
    - Both are in the same domain.

    So far I've unsuccessfully tried the following to automate the password input:

        System.Diagnostics.Process.Start

        var WinHttpReq = new ActiveXObject("WinHttp.WinHttpRequest.5.1");
        WinHttpReq.Open("GET", "http://website", false);
        WinHttpReq.SetCredentials("username", "password", 0);

    Nothing seems to work. I checked in IIS; we have only anonymous and forms authentication enabled. Is there any configuration setting on the client machine that can be tweaked to bypass this? Although I'd hate to do it, since you step on the toes of twenty people trying to do that - the preferable way would be to input it programmatically, if that's possible. Also, if you can suggest a more appropriate forum, that'd be great too. Please help.

    Read the article

  • Can a folder on a NAS be made available as a physical drive in VMWare?

    - by asbjornu
    We are currently in the process of moving from a single web server to two load-balanced web servers and are facing some challenges we don't quite know how to fix. One of these is that the current single server hosts applications that write stuff to disk. The applications running on the server expect that when something is written to disk it will in fact exist later, so it's important that this premise is fulfilled with the dual-server architecture as well.

    The dual-server setup is a couple of VMware instances with Windows Server 2008 R2 as the guest operating system. Out of the box, these instances do not share any kind of file system, so just moving the applications over would break them, since one instance would write something to the file system that doesn't exist on the other. Thus we need to share a file system between the two virtual servers. Our host has proposed creating a network share on a SAN and mapping this share individually on each virtual machine. This doesn't work too well due to NTFS permissions, etc., because the share needs to be accessed by several independent web applications that won't even be in the same application pool. The only solution that kind of works is to hard-code an "identity" for each web application into its web.config file, but this means passwords in clear text, which doesn't sit well with me.

    Since the servers are virtual, I'm thinking: wouldn't it be possible to make a NAS area available as a physical disk in the guest operating system somehow? Since VMware has full control of the virtual hardware, you'd think it would be able to "fake" a local hard drive in the virtual machine that in reality is a folder on a NAS, but so far I haven't found anything that states how and if this is possible. So I have to ask the wonderful Server Fault community: can a folder on a NAS be made available as a physical drive (typically D:) in both of the virtual machines?

    Read the article

  • What is the proper way of debugging a slow Windows installation?

    - by Niklas
    You know the drill - you've been asked to check why your cousin's computer is running slow. I was there yesterday. Having been a Mac user since 2007, I haven't really dug deep into Windows internals in the past five years. Googling for answers reveals many, many different ones: broken registry, spyware, antivirus programs, fragmented disk, turning off visual effects, etc.

    In this particular case I was asked to look at a two-year-old HP laptop with Vista. Windows was running incredibly slowly and even opening a new Explorer window took almost a minute. I ended up doing all of the above: running CCleaner, defragmenting the disk, turning off visual effects, turning off Norton and a bunch of other things people believe have an impact on Windows performance.

    Now I'd like to understand this in depth. Is there a proper, "scientific" if you will, way of debugging and understanding where the problem with a slow-running Windows installation lies? (In my particular case this concerned Windows Vista, but let's try to create a general guide for XP and Windows 7 too.) To me, it seems wrong to just run a bunch of different tools without understanding the underlying cause of the error.

    Read the article

  • IIS 7.5 truncating POST body containing JSON data with ASP.NET MVC 3

    - by Guneet Sahai
    I'm facing a problem which I hope is a configuration issue with IIS, but it is causing a lot of trouble right now. Basically, I have a controller that accepts a JSON payload and does some processing. It generally works fine, but every now and then, when the system is under some load, I get an error. After some painful debugging, we figured out that the incoming JSON gets truncated, which causes the deserializer to fail.

    To narrow down the problem, we wrote a simple controller that accepts a JSON payload and tries to deserialize it; in case it fails, it just logs the error. This works fine, but when I hit it using a load-testing tool (JMeter) it throws the same error (truncation) for a few requests. The number of failures increases when I increase the number of parallel connections; it starts showing up at around 150 concurrent requests.

    We are running IIS 7 on a Windows 2008 server with ASP.NET MVC 3, with a more or less default IIS configuration. More information is available in my question below: http://stackoverflow.com/questions/12662282/content-length-of-http-request-body-size

    Read the article

  • How can I proxy multiple LDAP servers, and still have grouping of users on the proxy?

    - by Chris
    I have two problems that I'm hoping to find a common solution to. First, I need to find a way to have multiple LDAP servers (Windows ADs across multiple domains) feed into a single source for authentication. This is also needed to get applications that can't natively talk to more than one LDAP server to work. I've read this can be done with OpenLDAP - are there other solutions? Second, I need to be able to add those users to groups without being able to make any changes to the LDAP servers I'm proxying. Lastly, this all needs to work on Windows Server 2003/2008.

    I work for a very large organization, and creating multiple groups and having large numbers of users added to, moved between, and removed from them is no small task. This normally requires tons of paperwork and a lot of time. Time is the one thing we don't normally have; dodging the paperwork is just a plus. I have very limited experience in all this, so I'm not even sure what I'm asking will make sense. Atlassian Crowd comes close to what we need, but falls short of having its own LDAP front end. Can anyone provide any advice or product names? Thanks for any help you can provide.

    Read the article

  • How should I perform database maintenance on a 24x7 system

    - by solublefish
    I'm a software developer who inherited a part-time DBA role. I'm responsible for an application backed by a small, high-volume 24x7 database on SQL Server 2008. While there's other stuff in the DB, the critical piece is a 50 GB, 7.5M-row table that serves 100K requests/sec during peak load, and about half that at "night". This is 99%+ read traffic, but the writes are constant and required.

    I need to be able to perform periodic maintenance without a maintenance window - say an index rebuild, a job to purge old data, Windows Update, or a hardware upgrade. Most of the advice I've seen is along the lines of "MAKE a maintenance window." While I appreciate the sentiment, I hope there's another way. If it will solve this problem, I do have the ability to purchase new hardware or modify the database, the clients (a set of web services servers), and much of the application code (ADO.NET + ASP.NET).

    I've been thinking along the lines of using the warm spare (or a third server) to do the maintenance, and then "swapping" it into production:
    1. Synchronize the spare by restoring backups, including a current transaction log.
    2. Perform the maintenance tasks.
    3. Reconfigure clients to connect to the spare server. Existing connections finish within a minute or so.
    4. The spare server is now the production server.

    The remaining problem is that the new production server is now out of date by however long it took to perform the maintenance. Is there some way that the original production server can be made to queue up changes and merge them to the spare between steps 2 and 3? Any other ideas?
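
    As a side note on the index-rebuild example: where the SQL Server edition supports it, a rebuild can be run as an online operation, which is the sort of thing a sketch like the following illustrates (server, table and index names here are hypothetical, and ONLINE = ON assumes Enterprise Edition).

        rem sketch only - an online rebuild of a single index via sqlcmd
        sqlcmd -S MYSERVER -E -Q "ALTER INDEX IX_BigTable_Key ON dbo.BigTable REBUILD WITH (ONLINE = ON);"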

    Read the article

  • I deployed Flash Player via a Software Installation policy. How to upgrade?

    - by eleven81
    I have a Windows Server 2008 machine as my DC. Earlier this year I created a Software Installation GPO to deploy the Adobe Flash Player plugin MSI. I assigned the policy to the computers; about half run Windows XP x86 and the other half Windows 7 x64. That all works like clockwork. When I created the Software Installation policy, I disabled the Flash Player plugin's automatic update feature by editing the MSI in Orca. I did this because I wanted all of my machines to run the exact same version of the plugin.

    Now some time has passed, a newer version of the Flash Player plugin has been released, and it is time for me to push out the updated version. I already have the new MSI, but I am lost on what to do next. I see the Upgrades tab in the Software Installation GPO, but everything there reads as though it would be used for add-ons to a larger master program, not for updates that are released over time.

    I have read that it is best to create a new Software Installation policy with the new MSI, revoke the old GPO, and assign the new GPO. I feel as though, over time, I will wind up with more revoked policies than active ones. I have also read that some people have had success by replacing the old MSI with the new MSI and simply telling the GPO to redeploy. This seems like a backdoor method that will only get me into trouble. In short, what is the correct, best-practice, or preferred way to roll out the new version via Group Policy?

    Read the article

  • Method to integrate Powershell scripts with non-Windows workflow?

    - by Matt Simmons
    I love the smell of new machines in the morning. I'm automating a machine-creation workflow that involves several separate systems across my infrastructure, some of which involve 15-year-old Perl scripts on Solaris hosts, PXE-booting Linux systems, and PowerShell on Windows Server 2008. I can script each of the individual parts, and integrating the Linux and Unix automation is fairly straightforward, but I'm at a loss as to how to reliably tie the PowerShell scripts into the rest of the processes.

    I would prefer that the process begin on a Linux host, since I imagine it will end up as a web application living on an Apache server, but if it needs to begin on Windows, I am hesitantly okay with that. I would ideally like something along the lines of psexec for Linux to run against Windows, but the answer in that direction appears to be Cygwin, and as much as I appreciate all of the hard work they put in, it has never felt right, if you know what I mean. It's great for a desktop and gives a lot of functionality, but I feel like Windows servers should be treated like Windows servers and not bastardized Unix machines (which, incidentally, is my argument against OS X servers too, and they're actually Unix). Anyway, I don't want to go with Cygwin unless that's the last and only option.

    So I guess what I'm asking is whether there is a way to execute jobs on Windows machines from Linux. Without Cygwin. I'm open to ideas and suggestions, including "Look idiot, everyone uses Cygwin, so suck it up and deal with it". Thanks in advance!

    Read the article

  • Virtualize SBS 2003 - P2V vs migrating to new VM

    - by jlehtinen
    I need to virtualize an SBS 2003 server in my work environment, and could use some tips on what people think is the best way to proceed.

    Background: the SBS 2003 server is the primary DC for the domain and also hosts FTP, RRAS (VPN), DNS, and file shares. Exchange is NOT used, and neither is SQL Server. DHCP is done via a firewall appliance. I have added a Server 2003 VM to the domain and promoted it to the DC role; AD/DNS is replicating there correctly. This was mainly done to provide fault tolerance to the domain - I was not intending to make this VM the primary DC. I've already asked about buying upgraded licensing for Server 2008/2012 but was refused due to cost.

    Options: I see (at least) two routes I could take to complete this. From what I've read, option 2 is the "preferred" method, but there are a few steps where I'm not clear on what to expect.

    Option 1) P2V the primary DC:
    - Power off the primary DC
    - Power off the secondary DC (to prevent USN rollback in case the P2V has issues)
    - P2V (cold clone) the primary DC
    - Boot the new PDC VM
    - Allow the new hardware to be detected
    - Remove the old NIC hardware from Device Manager
    - Assign the old IPs to the new virtual NICs
    - Reboot the PDC VM, confirm connectivity and no major issues
    - Power on the secondary DC, confirm replication

    Option 2) Create a new VM, transfer roles, remove the original DC from the domain:
    - Create a new VM and install SBS 2003 (do I need the original SBS install discs for this? The MS migration doc mentions this.)
    - Add the VM to the domain and promote it to the DC role (does this start the 7-day timer during which two SBS servers can be in the same domain?)
    - Set up RRAS on the new VM
    - Set up IIS/FTP on the new VM
    - Move the file shares to the new VM
    - Transfer the FSMO roles to the new VM DC
    - dcpromo the original primary DC out of the domain
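
    For the FSMO step in option 2, a hedged sketch of the sanity check that usually brackets the transfer (run from a command prompt on a DC with the support tools installed):

        rem sketch only - confirm which DC holds each FSMO role before and after the transfer
        netdom query fsmo
        netdom query dc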

    Read the article

  • Windows 7 caches FTP credentials?

    - by Martin Booka Weser
    On my remote machine I have IIS 7.5 (Windows Server 2008) and set up an FTP site with IIS Manager authentication. I then configured Active Directory user isolation and isolated my users to physical folders according to their names. So far, so good. I can connect with FTP clients from everywhere with the different test accounts that I previously set up in IIS Manager authentication, and every user connects to their own folder.

    When I tested with Windows 7 as a client, I did the following: Explorer - Computer - right click - add network address - the IP of my remote machine - user1 - password1. Perfect - it works. I then wanted to connect as user2, so I deleted this network address and set up a new connection, but with user2 (or even anonymous) instead. Now the strange thing: Windows doesn't even ask me for a password again. It just connects me to the folder of user1.

    I have already disabled FTP caching in IIS and I disabled the user1 account in IIS Manager authentication! Still, if I set up a network connection from this Windows 7 machine, it connects to the folder of user1, no matter which username I use (anonymous, administrator, user2, ...). And if I connect with other FTP clients or from other computers, it all works perfectly. So I assume that this one Windows machine somehow caches the credentials... But then, why does IIS still accept these credentials even though I disabled the user1 account? Thanks.
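
    A hedged sketch of the client-side checks that make this kind of caching visible (run on the Windows 7 box; nothing here changes the server):

        rem sketch only - list stored credentials and current connections, then drop the mapped connections
        cmdkey /list
        net use
        net use * /delete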

    Read the article

  • Network connection keeps dropping - bad hardware?

    - by Bill Sambrone
    Hello all, I've run into a bit of a wall with a client of mine. In an office of 20 people, he is the only one who experiences broken connections to his mapped network drives. I have everyone set up with about 6 mapped drives, all pointing to the same server (no DFS), and everyone else can access them lightning fast. The environment consists of a mix of Windows 7 and XP machines, all 32-bit. The server holding the data everyone maps to is running Server 2008 R2 and is a domain controller. We recently swapped out their old 10/100 switch for a shiny new Dell PowerConnect gigabit switch, and replaced an old dying SonicWALL with a shiny new one. Everything runs on an ESX host except for the DC, where everyone is getting their data from.

    In my client's office we have done the following:
    - Swapped out his computer (a Win7 and an XP box)
    - Swapped out the desktop switch in his office
    - Removed the desktop switch in his office
    - Changed out the network cable going to the wall
    - Ran 'net config server /autodisconnect:-1' on the server
    - Disabled remote differential compression on his current Win7 box

    When we swapped out his network cable, everything seemed fine for about 4 days. Normally I would get a phone call a couple of times per day letting me know that Outlook has crashed (there is a 9 GB PST living on the server he is always connected to), or that the software he runs from his L: drive has crashed. I almost thought I had this solved, but after we rebooted the DC the other night he all of a sudden couldn't stay connected to his mapped network drives for more than 10 minutes. When I ran 'net use' from the command prompt, it listed all the network drives, which were randomly in a state of 'OK', 'Disconnected', or 'Reconnecting'.

    What else should I try? Maybe there is bad wiring in the wall, the patch panel, or a bad port in the new switch I have in the server room?

    Read the article

  • IIS replication - Is it possible

    - by Ian
    Hi all, I have a requirement from a client for a centralised system that all his satellite branches can work on. Currently this is an ASP.NET Web Forms app running under IIS 7 on Windows 2008 RC 2, using a SQL backend. The client has now requested that each branch have a local server, so that in the event the internet connection is down, the branch's productivity does not suffer. His other request is that everything can be updated via the central hub, with some mechanism for the updates to filter down to the individual sites.

    What are my options here? I see the following as possibilities:
    - Multiple redundant internet connections controlled by load balancers
    - SQL replication for the DB (what is better: snapshot, merge or transactional?)
    - Roll my own IIS sync service that periodically checks whether there is a new version of the web app and downloads it (I hope there are better options than this)
    - Something way better I don't yet know about (I hope this is the one I need)

    One of my client's concerns is that the branches are often in very remote areas where everything from technicians to internet access is hard to find and very scarce. Any ideas, suggestions, tips etc. are welcome. Thanks all.
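
    Purely as an illustration of the "roll my own sync" option, the simplest form of it is usually just a scheduled mirror of the central content to each branch server; a sketch (all paths and names here are assumed):

        rem sketch only - mirror the hub's copy of the site to the local IIS content folder
        robocopy \\hub\deploy\mysite C:\inetpub\wwwroot\mysite /MIR /R:2 /W:5 /LOG:C:\Logs\sitesync.log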

    Read the article

  • DPMS does not work: the monitor is not switched off

    - by bortzmeyer
    I have a monitor which was properly switched off by my Debian PC when unused. I attached it to another machine and, this time, it is never switched off. In /etc/X11/xorg.conf I have:

        Section "Monitor"
            Identifier "Generic Monitor"
            Option "DPMS"

    It is recognized when X11 starts:

        (II) Loading extension DPMS
        ...
        (II) VESA(0): DPMS capabilities: StandBy Suspend Off; RGB/Color Display
        ...
        (**) Option "dpms"
        (**) VESA(0): DPMS enabled

    The operating system is Debian stable "lenny". The graphics card is:

        00:02.0 VGA compatible controller: Intel Corporation 82G33/G31 Express Integrated Graphics Controller (rev 02) (prog-if 00 [VGA controller])
            Subsystem: Hewlett-Packard Company Device 2a6f
            Flags: bus master, fast devsel, latency 0, IRQ 5
            Memory at fe900000 (32-bit, non-prefetchable) [size=512K]
            I/O ports at b080 [size=8]
            Memory at d0000000 (32-bit, prefetchable) [size=256M]
            Memory at fe800000 (32-bit, non-prefetchable) [size=1M]
            Capabilities: [90] Message Signalled Interrupts: Mask- 64bit- Queue=0/0 Enable-
            Capabilities: [d0] Power Management version 2

    X11 is:

        X.Org X Server 1.4.2
        Release Date: 11 June 2008
        X Protocol Version 11, Revision 0
        Build Operating System: Linux Debian (xorg-server 2:1.4.2-10.lenny2)
        Current Operating System: Linux ludwigVII 2.6.26-2-686 #1 SMP Sun Jun 21 04:57:38 UTC 2009 i686
        Build Date: 08 June 2009 09:12:57AM
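
    A quick manual test of DPMS from a running X session on that box might look like this sketch (the timeouts are arbitrary examples):

        xset q                  # the DPMS section at the end shows whether DPMS is enabled and its timeouts
        xset dpms force off     # should blank/power down the monitor immediately if DPMS works at all
        xset dpms 300 600 900   # standby/suspend/off timeouts, in seconds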

    Read the article

  • Postgres pgpass windows - not working

    - by Scott
    DB: Postgres 9.0. Client: Windows 7. Server: Windows 2008, 64-bit.

    I'm trying to connect remotely to a Postgres instance in order to perform a pg_dump to my local machine. Everything works from my client machine, except that I need to provide a password at the password prompt, and I'd ultimately like to batch this with a script. I've followed the instructions here: http://www.postgresql.org/docs/current/static/libpq-pgpass.html but it's not working.

    To recap, I've created a file on the client (and tried the server as well): C:/Users/postgres/AppData/postgresql/pgpass.conf, where postgresql is the db user. The file has one line with the following data:

        *:5432:*postgres:[mypassword]

    (I also tried explicit IP/dbname values, all asterisks, and every combination in between, and I've also tried replacing each '*' with [localhost|myip] and [mydatabasename] respectively.) From my client machine, I connect using:

        pg_dump -h [myip] -U postgres -w [mydbname] [mylocaldumpfile]

    I'm presuming that I need to provide the '-w' switch in order to skip the password prompt, at which point it should look in the AppData directory on the server. It just comes back with "connection to database failed: fe_sendauth: no password supplied."

    Any insights are appreciated. As a hack workaround, if there were a way I could tell the Windows batch file on my client machine to inject the password at the Postgres prompt, that would work as well. Thanks.
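
    For reference, the documented pgpass format is five colon-separated fields, and the lookup happens on the machine running the client tool; a hedged sketch (the host, database name and password below are placeholders):

        # hostname:port:database:username:password
        *:5432:*:postgres:mypassword

        pg_dump -h 192.0.2.10 -U postgres -w -f C:\dumps\mydb.sql mydbname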

    Read the article

  • Using udev to create a character device based on a driver being loaded

    - by SteveCB
    I'm in the process of setting up RAID monitoring for a number of Dell servers that use the PERC 6i integrated card. We're using Nagios at present, and the check_megasasctl plugin seems to fit the bill. However, the plugin relies upon the existence of:

        /dev/megaraid_sas_ioctl_node

    This device node doesn't exist by default; you have to create it by hand using something like:

        mknod /dev/megaraid_sas_ioctl_node c 253 0

    Now, to make the existence of this device node persistent across reboots, I thought I could write a udev rule, but as usual I'm missing something. I thought I could create a file such as /etc/udev/rules.d/10-local/rules that contained:

        DRIVER=="megasas" NAME="megaraid_sas_ioctl_node" MODE="0600"

    But this doesn't work - no device node after a reboot. Dmesg output indicates the megasas driver is loaded and functional:

        megasas: 00.00.04.01-RH1 Thu July 10 09:41:51 PST 2008
        megasas: 0x1000:0x0060:0x1028:0x1f0c: bus 1:slot 0:func 0
        megasas: FW now in Ready state

    Further, I don't see any means to instruct udev on which type of device node to create: character or block. I suspect I'm failing to understand exactly how udev is meant to work. I realise I could just cheat and run MegaCLI in /etc/rc.local, redirecting output to /dev/null; it creates the megaraid_sas_ioctl_node device node as part of its execution. I just thought using udev rules would be a) cleaner and b) a useful learning exercise. Perhaps I should just dump the above mknod command in /etc/rc.local...

    So how do I get udev to create the /dev/megaraid_sas_ioctl_node device node based on the presence of the megasas driver? Cheers, Steve
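
    A hedged sketch of how to see what udev actually has to match against here (commands as on current udev; older releases shipped these as separate udevmonitor/udevinfo binaries):

        # sketch only - watch kernel/udev events (e.g. across a reboot of a test box or a module load),
        # and walk an existing node to see which KERNEL/SUBSYSTEM/DRIVER keys are usable in a rule
        udevadm monitor --kernel --udev
        udevadm info --attribute-walk --name=/dev/sda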

    Read the article

  • Group Policy for DNS Server Addresses

    - by John
    This question deals strictly with the methodology of using Active Directory to publish DNS settings and controlling the DNS server address order on domain-attached clients. It is not asking whether this is the right function, role, or best use of Group Policy, but rather how to make it work.

    I would like to be able to publish DNS addresses in Group Policy and possibly control the order in which clients consume the addresses. I have tried following the information from this site without success. I set the following setting within Group Policy, but the client never shows the settings within the TCP/IP properties:

        Computer Configuration/Administrative Templates/Network/DNS Client/DNS Servers

    I did list them as a single space-separated list, for example:

        192.168.0.1 192.168.10.3 192.168.34.2 192.168.2.67 192.168.56.99 192.168.99.23

    This would be for Windows 7, Windows Server 2003, and Windows Server 2008 clients. I am not sure what I am doing wrong or how to get this to work. Am I missing a setting? Do I need to set something differently?
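
    A hedged sketch of checking on a client whether the policy actually applied (the registry value name is an assumption based on where the DNS Client administrative template normally writes its settings):

        rem sketch only - confirm the GPO applied and inspect the policy-set DNS server list
        gpresult /r
        reg query "HKLM\Software\Policies\Microsoft\Windows NT\DNSClient" /v NameServer
        ipconfig /all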

    Read the article

  • Problems installing Windows service via Group Policy in a domain

    - by CraneStyle
    I'm reasonably new to Group Policy administration and I'm trying to deploy an MSI installer via Active Directory to install a service. In reality, I'm a software developer trying to test how my service will be installed in a domain environment.

    My test environment: a Server 2003 domain controller and about 10 machines (a mix of XP SP3 and Server 2008), all joined to my domain. No other real setup or Active Directory configuration has been done apart from things like getting DNS right. I suspect I may be missing a step in Group Policy that says I need to grant an explicit permission somewhere, but I have no idea where that might be or what it will say.

    What I've done: I followed the documentation from Microsoft in "How to Deploy Software via Group Policy", so I believe all those steps are correct (I used the UNC path, verified NTFS permissions, and verified that the computers and users are members of groups that are assigned to receive the policy, etc.).

    If I deploy the software via the Computer Configuration, when I reboot the target machine it logs Event ID 108: "Failed to apply changes to software installation settings. Software changes could not be applied. A previous log entry with details should exist. The error was: An operations error occurred." There are no previous log entries to check, which is weird, because if it ever actually invoked the Windows Installer it should log any failure of my application's installer.

    If I open a command prompt and manually run:

        msiexec /qb /i \\[host]\[share]\installer.msi

    it installs the service just fine.

    If I deploy the software via the User Configuration, when I log that user in the event log says that software changes were applied successfully, but my service isn't installed. However, when deployed via the User Configuration, even though it's not installed, when I go to Control Panel - Add/Remove Programs and click on Add New Programs, my service installer is advertised and I can install/remove it from there (this does not happen when it's assigned to computers).

    Hopefully that wall of text was enough information to get me going. Thanks all for the help.
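
    One hedged diagnostic that pairs with the manual test above: run the same package by hand with verbose Windows Installer logging, so the working manual install leaves a log that can be compared against whatever the GPO deployment attempts (the log path is an assumption).

        rem sketch only - verbose MSI log for the manual install
        msiexec /qb /i \\[host]\[share]\installer.msi /l*v C:\Temp\installer-manual.log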

    Read the article

  • then an error occurred during the login process - Connection Error 233

    - by scott brunner
    We have SQL Server 2008 installed on 64-bit Windows Server 2003. When we try to connect to the local SQL Server using SQL Server Management Studio at the console, we get the error:

        A connection was successfully established with the server, but then an error occurred during the login process.
        (provider: Shared Memory Provider, error: 0 - No process is on the other end of the pipe.)

    When we try TCP from the same local SSMS to the local server, we get the same error, but instead of the pipe message it's something like "connection forcibly closed".

    Now, here is the strange part: we CAN connect to this SQL Server from any other machine on the network using SSMS - AND - we CAN'T connect to ANY SQL Server from the problem server. So it seems the SQL Server instance is fine and accepting remote connections; however, the SSMS on that machine will not connect to any SQL Server, even remotely. When we try an ADO.NET connection from C# remotely we can connect; run that same code on the console of the trouble server and we get the same errors. How can this be solved?
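
    A hedged sketch of testing each client protocol explicitly from the console of the problem server (this assumes a default instance; sqlcmd accepts lpc:, np: and tcp: prefixes to force a protocol):

        rem sketch only - shared memory, named pipes, then TCP
        sqlcmd -S lpc:localhost -E -Q "SELECT @@SERVERNAME"
        sqlcmd -S np:\\.\pipe\sql\query -E -Q "SELECT @@SERVERNAME"
        sqlcmd -S tcp:localhost,1433 -E -Q "SELECT @@SERVERNAME"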

    Read the article

  • Allowing non-admin users to unstick the print spooler

    - by Reafidy
    I currently have an issue where the print queue is getting stuck on a central print server (Windows Server 2008). Using the "Clear all documents" function does not clear it and gets stuck too. I need non-admin users to be able to clear the print queue from their workstations.

    I have tried using the following WinForms program, which I created. It allows a user to stop the print spooler, delete the printer files in the C:\Windows\System32\spool\PRINTERS folder, and then start the print spooler again - but this functionality requires the program to be run as an administrator. How can I allow my normal users to execute this program without giving them admin privileges? Or is there another way I can allow normal users to clear the print queue on the server?

        Imports System.ServiceProcess

        Public Class Form1

            Private Sub Button1_Click(sender As System.Object, e As System.EventArgs) Handles Button1.Click
                ClearJammedPrinter()
            End Sub

            Public Sub ClearJammedPrinter()
                Dim tspTimeOut As TimeSpan = New TimeSpan(0, 0, 5)
                Dim controllerStatus As ServiceControllerStatus = ServiceController1.Status
                Try
                    ' Stop the spooler service if it is running
                    If ServiceController1.Status <> ServiceProcess.ServiceControllerStatus.Stopped Then ServiceController1.Stop()
                    Try
                        ServiceController1.WaitForStatus(ServiceProcess.ServiceControllerStatus.Stopped, tspTimeOut)
                    Catch
                        Throw New Exception("The controller could not be stopped")
                    End Try

                    ' Delete everything in the spool folder
                    Dim strSpoolerFolder As String = "C:\Windows\System32\spool\PRINTERS"
                    Dim s As String
                    For Each s In System.IO.Directory.GetFiles(strSpoolerFolder)
                        System.IO.File.Delete(s)
                    Next s
                Catch ex As Exception
                    MsgBox(ex.Message)
                Finally
                    ' Return the spooler service to its original state
                    Try
                        Select Case controllerStatus
                            Case ServiceControllerStatus.Running
                                If ServiceController1.Status <> ServiceControllerStatus.Running Then ServiceController1.Start()
                            Case ServiceControllerStatus.Stopped
                                If ServiceController1.Status <> ServiceControllerStatus.Stopped Then ServiceController1.Stop()
                        End Select
                        ServiceController1.WaitForStatus(controllerStatus, tspTimeOut)
                    Catch
                        MsgBox(String.Format("{0}{1}", "The print spooler service could not be returned to its original setting and is currently: ", ServiceController1.Status))
                    End Try
                End Try
            End Sub

        End Class
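
    For reference, the same clear-the-queue sequence as plain commands (this is essentially what the helper above automates, and it needs the same elevation, which is the crux of the question):

        rem run from an elevated prompt on the print server
        net stop spooler
        del /Q C:\Windows\System32\spool\PRINTERS\*.*
        net start spooler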

    Read the article

  • csvde doesn't import users

    - by The Eighth Ero
    I have a small problem, as I'm a beginner at server management: I installed a domain controller on my Windows Server 2008 machine and created three OUs. Now I'm trying to add users to each OU via the csvde command, but I get the following as the result of the operation, without any errors being mentioned:

        C:\csvde>csvde -i -f List.csv
        Connecting to "(null)"
        Logging in as current user using SSPI
        Importing directory from file "List.csv"
        Loading entries.
        0 entries modified successfully.

    Below is the CSV file I'm using to add 2 users to the "Offshoring1" OU; the domain name is "iado.lan":

        DN                                      objectClass  sAMAccountName  sn  givenName  userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan  user         BB              NN  BB         [email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan  user         II              YY  II         [email protected]

    And this is the CSV data as generated by Word 2011 on my Mac:

        DN;objectClass;sAMAccountName;sn;givenName;userPrincipalNAme
        cn=BB NN,ou=Offshoring1,dc=iado,dc=lan;user;BB;NN;BB;[email protected]
        cn=II YY,ou=Offshoring1,dc=iado,dc=lan;user;II;YY;II;[email protected]

    I do use the -k option to force the import, but still no success.
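
    A hedged sketch of re-running the same import with more diagnostics (the log directory is an assumption): csvde can emit verbose output and write per-run log files, which usually surfaces why entries are being skipped.

        rem sketch only - verbose import with a log file directory
        csvde -i -f List.csv -v -j C:\Temp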

    Read the article
