Search Results

Search found 42468 results on 1699 pages for 'default program'.


  • HTTPS request to a specific load-balanced virtual host (using Shibboleth for SSO)?

    - by Gary S. Weaver
    In one environment, we have three load-balanced servers, each running a single Tomcat instance fronted by two different Apache virtual hosts. Each of those two virtual hosts (served by all three servers) has its own load balancer. Internally, the first host (we'll call it barfoo) is served on port 443 (HTTPS) with its own cert, and the second host (we'll call it foobar) is served on port 1443 (HTTPS). When you hit foobar, the request goes to a load balancer that uses IP affinity for that host, so you can easily test login/HTTPS on the one server currently serving foobar, but not the others (because you keep getting the same server for the lifetime of the LB session, IIRC).

    In addition, each of the servers uses Shibboleth v2 for authN/SSO, via mod_shib (IIRC). So a normal request to foobar hits the LB, is directed to the third server (and keeps going there for as long as the LB session lasts), then Apache, then the Shibboleth SP, which looks at the request and makes you log in via negotiation with the Shibboleth IdP. You then hit Apache again, which in turn hits Tomcat, which renders and returns the response. (I'm leaving out some steps there.)

    We'd like to hit one of the individual servers (foobar-03.acme.org, which we'll say has IP 1.2.3.4) via HTTPS, skipping the load balancer. We at first tried putting this in /etc/hosts:

        1.2.3.4 foobar.acme.org

    But since foobar.acme.org is a secondary virtual host running on 1443, the browser ends up at barfoo.acme.org (the default host for 443) rather than foobar.acme.org at port 1443, and it sees that barfoo.acme.org's cert is invalid for this case, since it doesn't match the request's host, foobar.acme.org. I thought an SSH tunnel might be easy enough, so I tried:

        ssh -L 7777:foobar-03.acme.org:1443 [email protected]

    Hitting https://localhost:7777/webappname in a browser almost works, but when the Shibboleth login is over, it again redirects to barfoo.acme.org, the default host for 443, and we get into an infinite redirect loop. I then tried setting up an SSH tunnel from privileged local port 443 to port 1443 on foobar-03.acme.org:

        sudo ssh -L 443:foobar-03.acme.org:1443 [email protected]

    and also edited /etc/hosts to add:

        127.0.0.1 foobar.acme.org

    This finally worked, and I was able to get the browser to hit the individual HTTPS host at https://foobar.acme.org/webappname, bypassing the load balancer. But this was a bit of a pain and wouldn't work for everyone, due to the need for the privileged local port 443 and SSH access to the server. Is there an easier way to browse to and log into an individual host in this case?
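    One lighter-weight option, sketched below: curl's --resolve flag pins a hostname to a specific IP for a single request, without touching /etc/hosts or a tunnel, so SNI and the Host header still say foobar.acme.org while the TCP connection goes to the one backend. (Host, port, and path are the ones from the question; whether the Shibboleth redirect flow survives this depends on the IdP.)

        # talk to foobar-03 directly on 1443, but present the foobar.acme.org name
        curl --resolve foobar.acme.org:1443:1.2.3.4 https://foobar.acme.org:1443/webappname

        # check which cert that port actually serves
        openssl s_client -connect 1.2.3.4:1443 -servername foobar.acme.org </dev/null | head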


  • How to convert DVD to iPad (also converts to iPods)?

    - by goodm
    DVD to iPad Converter (also converts to iPods). DVD to iPad Converter is an easy-to-use, fast DVD to iPad converter for Apple iPad movies and videos. It can convert almost any kind of DVD to an iPad movie or video format, and its conversion speed is much faster than real time. With this converter, you can use your iPad as a portable DVD player and enjoy your favorite DVDs on it. http://www.softseeking.com/prodail.aspx?proid=83

    Features of this DVD to iPad Converter:

    - Three running modes. In Direct Mode, you directly click the DVD menu to select the movie you want to rip; this mode makes ripping DVD movies very easy. In Batch Mode, you select the DVD titles or chapters to rip via a checkbox list; this is handy for batch-ripping music DVDs, MTV DVDs and episodic DVDs. In 1-Click Mode, you need just one click to open a DVD, after which the rest of the task is done automatically - a "designed-for-dummies" mode.
    - Input types: you can convert almost any kind of DVD format to iPad.
    - Output splitting: you can split output video by DVD chapters and titles. Fully supports MTV DVDs and episodic DVDs.
    - File size and quality adjustment: you can customize the output file size and the corresponding video quality.
    - Flexible output profiles: you can easily customize video settings such as brightness, bit rate, etc.
    - Language selection for subtitles and audio track: in Direct Mode and Batch Mode, you can select the subtitle and audio track language (in 1-Click Mode, the default language is chosen automatically).
    - Video crop: you can crop your video to 16:9, 4:3, full screen, etc.
    - Video resize: for example, you can set "Keep aspect ratio" or "Stretch to fit screen."
    - Other: converts DVD to MP3 audio; supports Dolby and DTS Surround audio tracks; converts for the latest iPhone, 4th-generation iPod nano, iPod nano chromatic, 2nd-generation iPod touch, and Apple TV.


  • Running an rsync sweep before initializing lsyncd for synchronizing instances on EC2

    - by chrisallenlane
    My company uses several EC2 servers that scale up and down according to the load our sites are receiving at any given moment. For the sake of discussion, we're running four instances:

    - master.ourdomain.com - the file-syncing "hub" of the webservers
    - www1/www2/www3.ourdomain.com - three webservers which turn on or off as dictated by load

    I'm using lsyncd to keep all of the webservers in sync, and for the most part it's working quite well. We're using a two-way syncing scheme, such that each webserver syncs against master and master syncs against each webserver. The webservers are thus kept in sync even though they don't sync against each other directly.

    I'm having one problem that I'm finding hard to solve, though. It occurs under these circumstances: changes are made on master (perhaps after we've pushed new code) while some of the redundant webservers are sleeping, and then a sleeping webserver wakes up to absorb load. Under those circumstances, I would like the following to happen: first, the newly awoken webserver should sync its file structure - one way - against master, to bring its web application code up to date; then, and only then, should it begin pushing changes in its own file structure back to master. Unfortunately, when a sleeping server is started, lsyncd currently pushes changes back to master before updating its own codebase, thus overwriting new code with old.

    So, before lsyncd starts, I'd like to be able to synchronize the webserver's code against master's, perhaps by running a simple one-way rsync between the two machines. We're running lsyncd v2, and I've tried to make this happen using the "bash" configuration options documented in the lsyncd manual. My configuration file looks like this:

        settings = {
            logfile = "/home/user/log/lsyncd/log.txt",
            statusFile = "/home/user/log/lsyncd/status.txt",
            maxProcesses = 2,
            nodaemon = false,
        }
        bash = {
            onStartup = "rsync [email protected]:/home/user/www /home/user/www"
        }
        sync{
            default.rsyncssh,
            source="/home/user/www/",
            host="[email protected]",
            targetdir="/home/user/www/",
            rsyncOpts="-ltus",
            excludeFrom="/home/user/conf/lsyncd/exclude"
        }

    (I've obviously redacted that file somewhat to protect the identities of the guilty.) Simply put, though, this just isn't working. How else might I approach this problem? I was looking at the --delete-after option in man rsync, but I don't think it does what I'm looking for. Are there any suggestions about how I should approach this? Thanks for lending your time and expertise. Chris
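    One approach that sidesteps lsyncd's startup hooks entirely: wrap the daemon in a small start script that does the one-way pull first and only execs lsyncd if the pull succeeds. A minimal sketch, reusing the paths and redacted hostname from the question (the lsyncd config file location is a guess):

        #!/bin/sh
        # Pull master's copy of the code before lsyncd gets a chance to push anything back.
        rsync -a --delete [email protected]:/home/user/www/ /home/user/www/ || exit 1
        # Only start two-way syncing once the local tree matches master.
        exec lsyncd /etc/lsyncd/lsyncd.conf.lua

    Note the trailing slashes on both rsync paths (sync the directory contents, not the directory itself) and --delete, so files removed on master while this box slept get removed locally too.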


  • Managing multiple independent domains with Google Apps

    - by Saif Bechan
    I am currently running a server hosting multiple domains, each with its own mail server. My plan is to outsource this whole email service and have Google, or a competitor, run it for me. Let me start with the setup I have now and want to migrate to Google.

    Initial setup: I have a main domain where I run my server and my nameserver. This is an important domain because it holds the connection with all my internal applications; for example, log messages, cronjob messages, and virus-scan messages are sent to this domain. This email address is also registered at my registrar, and I use it to communicate with my ISP. Next, I run a few independent websites that all need their own email addresses. This can be on shared space, I don't mind; 1 GB will be enough for everything I am going to do. Summary:

    - superdomain.com (which only has a catch-all for internal use and communication with my ISP)
    - cars.com (independent)
    - flowers.com (independent)
    - foods.com (independent)

    I am going to be the admin for all of this. The independent domains don't need their own admin panels; they just need email addresses like info@, support@, etc. I do all the managing, and they just send and receive email using the accounts I give them. Each website's staff uses its own accounts.

    Tried so far: I have registered my superdomain, but I can only add aliases to the main domain. If I make all the other domains aliases, emails to [email protected] and [email protected] will land in the same inbox, and I want them to be separate. Is the only way to achieve this to create an account for each domain? And if so, is there no way of creating a superdomain account from which I can edit all these accounts easily, without having to log in to four different places to get my work done? I have searched the Google help forums and posted questions, but without any results so far.

    Questions: can anyone please give me some advice on what to do? I currently use the free program Google has.


  • Windows 7 Home hangs at "Welcome" screen

    - by White Phoenix
    I'm asking on behalf of a friend who's currently having problems with his machine, which runs Windows 7 Home 32-bit. He's too far away for me to help by going over to his house, so I'm helping him over the Internet. This is his machine: http://www.newegg.com/Product/Product.aspx?Item=N82E16883227134 - the only two changes he made are swapping the graphics card for an EVGA GTX 460 and the PSU for a Corsair TX650.

    Here's what happened: he was playing a fairly CPU/GPU-intensive computer game with some music going in the background in foobar2000. Suddenly he noticed the music had stopped, so he switched to foobar to close it, but it froze (the window wouldn't respond). Figuring foobar was just having a bad day, he force-quit it. Suddenly his game wouldn't respond either, so he force-quit that too, at which point the entire computer just went to crap, so he hit the restart button on his machine.

    The computer POSTs fine, but now it gets stuck at the Windows "Welcome" screen (his account is set to auto-login). The HD activity light is solid yellow, but he doesn't hear any HDD activity. He tried booting into Safe Mode - stuck at the Welcome screen. He tried a Startup Repair within Windows 7; it found a few problems, but it still gets stuck at Welcome. I advised him to boot off the DVD - sfc found nothing (we couldn't use the regular /scannow option, since it says there's a repair pending; we had to use the /offbootdir and /offwindir switches). We ran Startup Repair three times - it found nothing. My friend runs virus/malware scans on a regular basis, so he's fairly sure it's not that either.

    Right now I'm having my friend run chkdsk /R on the computer while in this Startup Recovery mode - so far it's caught a few bad sectors. At this point, though, I'm wondering which way to go if chkdsk doesn't fix it. A quick Google search said someone had success booting Windows with boot logging on; some others had success with running the aforementioned chkdsk, etc. The fact that Windows cannot even boot into Safe Mode concerns me. While we're waiting for chkdsk /R to finish, are there any other options I can give my friend short of reinstalling Windows 7? He has his data on a separate partition, so that's not a major problem (though it'll be an annoyance for him). I suspect his hard drive may be having some issues, but my main concern is getting him back up and running before we start diagnosing the drive (I may have him run some sort of SMART test utility later).


  • Jetty interprets JETTY_ARGS as file name

    - by Lena Schimmel
    I'm running Jetty (version "null 6.1.22") on Ubuntu 10.04. It runs fine until I need JSP support. According to several blog posts, I need to set JETTY_ARGS to OPTIONS=Server,jsp. However, if I put this into /etc/default/jetty:

        JETTY_ARGS=OPTIONS=Server,jsp

    and restart Jetty via /etc/init.d/jetty stop && /etc/init.d/jetty start, it reports success but does not accept connections. I noticed that it logs the following to /usr/share/jetty/logs/out.log:

        2012-09-11 11:19:05.110:WARN::EXCEPTION
        java.io.FileNotFoundException: /var/cache/jetty/tmp/OPTIONS=Server,jsp (No such file or directory)
            at java.io.FileInputStream.open(Native Method)
            at java.io.FileInputStream.<init>(FileInputStream.java:137)
            at java.io.FileInputStream.<init>(FileInputStream.java:96)
            at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:87)
            at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:178)
            at com.sun.org.apache.xerces.internal.impl.XMLEntityManager.setupCurrentEntity(XMLEntityManager.java:630)
            at com.sun.org.apache.xerces.internal.impl.XMLVersionDetector.determineDocVersion(XMLVersionDetector.java:189)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:776)
            at com.sun.org.apache.xerces.internal.parsers.XML11Configuration.parse(XML11Configuration.java:741)
            at com.sun.org.apache.xerces.internal.parsers.XMLParser.parse(XMLParser.java:123)
            at com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser.parse(AbstractSAXParser.java:1208)
            at com.sun.org.apache.xerces.internal.jaxp.SAXParserImpl$JAXPSAXParser.parse(SAXParserImpl.java:525)
            at javax.xml.parsers.SAXParser.parse(SAXParser.java:392)
            at org.mortbay.xml.XmlParser.parse(XmlParser.java:188)
            at org.mortbay.xml.XmlParser.parse(XmlParser.java:204)
            at org.mortbay.xml.XmlConfiguration.<init>(XmlConfiguration.java:109)
            at org.mortbay.xml.XmlConfiguration.main(XmlConfiguration.java:969)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at org.mortbay.start.Main.invokeMain(Main.java:194)
            at org.mortbay.start.Main.start(Main.java:534)
            at org.mortbay.jetty.start.daemon.Bootstrap.start(Bootstrap.java:30)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:616)
            at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:177)

    That is, whatever I put into JETTY_ARGS, it interprets it as a filename inside /var/cache/jetty/tmp/ and tries to parse that file as XML (or does it parse some other XML and try to read this file as a DTD? I'm not sure). This doesn't seem to make any sense to me, especially since that directory is entirely empty. I've verified this with several other strings, not only OPTIONS=Server,jsp.
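    The stack trace shows org.mortbay.xml.XmlConfiguration.main receiving the value as a config-file argument, which suggests the init script appends JETTY_ARGS to the list of XML files handed to start.jar rather than to the JVM/start options. A hedged first diagnostic (paths are the Debian/Ubuntu defaults from the question):

        # how does the init script actually splice JETTY_ARGS into the java command line?
        grep -n 'JETTY_ARGS' /etc/init.d/jetty /etc/default/jetty

        # while Jetty is (mis)starting, capture the exact command line it was launched with
        ps auxww | grep '[s]tart.jar'

    If JETTY_ARGS really lands after start.jar as a bare argument, moving the option into JAVA_OPTIONS or the start configuration may be the route - but that depends on what the grep shows.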


  • BIND DNS server (Windows) - Unable to access my local domain from other computers on LAN

    - by Ricardo Saraiva
    I have a BIND DNS server running on my Windows 7 development machine, and I'm serving pages with WampServer. My idea is to develop some tools (in PHP) for my intranet at work, and I want them to be accessible via LAN in this format: http://tools.mycompany.com

    BIND is already in place, and I can open http://tools.mycompany.com on the machine that hosts BIND, but I cannot access it from other LAN computers. I've done the following on my router:

    - defined static IPs for all LAN computers
    - set port forwarding to my server (remember: it serves DNS and web pages)
    - pointed the router's DNS server setting at my LAN server

    On the LAN computers, I went to the Local Area Network properties and also changed the DNS server IP to point at my local DNS server. If it helps, here is my named.conf file:

        options {
            directory "c:\windows\SysWOW64\dns\etc";
            forwarders {127.0.0.1; 8.8.8.8; 8.8.4.4;};
            pid-file "run\named.pid";
            allow-transfer { none; };
            recursion no;
        };
        logging{
            channel my_log{
                file "log\named.log" versions 3 size 2m;
                severity info;
                print-time yes;
                print-severity yes;
                print-category yes;
            };
            category default{ my_log; };
        };
        zone "mycompany.com" IN {
            type master;
            file "zones\db.mycompany.com.txt";
            allow-transfer { none; };
        };
        key "rndc-key" {
            algorithm hmac-md5;
            secret "qfApxn0NxXiaacFHpI86Rg==";
        };
        controls {
            inet 127.0.0.1 port 953 allow { 127.0.0.1; } keys { "rndc-key"; };
        };

    ...and the single zone I've defined, in file db.mycompany.com.txt:

        $TTL 6h
        @       IN SOA  tools.mycompany.com. hostmaster.mycompany.com. (
                        2014042601 10800 3600 604800 86400 )
        @       NS      tools.mycompany.com.
        tools   IN A    192.168.1.4
        www     IN A    192.168.1.4

    In the file above, 192.168.1.4 is the IP of the local machine inside my LAN. Can someone help me here? I need my web pages to be accessible from other computers inside my LAN using my custom domain name. I've tried from other computers: they can access my server via http://192.168.1.4/, but not via http://tools.mycompany.com. Please consider the following: I'm completely new to BIND, and I have only basic knowledge of Apache configuration. Thanks a lot for your help.
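    A quick way to narrow this down, sketched below: query the BIND box directly from one of the failing LAN machines. If the direct query answers but normal browsing fails, the clients aren't really using 192.168.1.4 as their resolver; if the direct query also fails, the server side (e.g. the Windows firewall dropping UDP/TCP 53) is the problem. (IP and hostname taken from the question.)

        # ask the BIND server directly, bypassing whatever resolver the client is configured with
        nslookup tools.mycompany.com 192.168.1.4

        # same check with dig, if available
        dig @192.168.1.4 tools.mycompany.com A +short

    Note also that named.conf has recursion no: authoritative answers for mycompany.com still work, but LAN clients pointed solely at this server will fail to resolve everything else, despite the forwarders line.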


  • iptables : how to correctly allow incoming and outgoing traffic for certain ports?

    - by Rubytastic
    I'm trying to get incoming and outgoing traffic enabled on specific ports, because I block everything at the end of the iptables rules: INPUT and FORWARD both end in REJECT. What would be the appropriate way to open certain ports for all incoming and outgoing traffic? From the docs I found the lines below, but does one really have to define both?

        iptables -A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW,ESTABLISHED -j ACCEPT
        iptables -A OUTPUT -o eth0 -p tcp --sport 22 -m state --state ESTABLISHED -j ACCEPT

    I'm trying to open ports for an XMPP service and some other daemons running on the server. Rules:

        *filter

        # Allow all loopback (lo0) traffic and drop all traffic to 127/8 that doesn't use lo0
        -A INPUT -i lo -j ACCEPT
        -A INPUT -d 127.0.0.0/8 -j REJECT

        # Accept all established inbound connections
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

        # Allow all outbound traffic - you can modify this to only allow certain traffic
        -A OUTPUT -j ACCEPT

        # Allow HTTP
        # Prevent DDOS attacks (http://blog.bodhizazen.net/linux/prevent-dos-with-iptables/)
        # Disallow HTTPS
        -A INPUT -p tcp --dport 80 -m state --state NEW -m limit --limit 50/minute --limit-burst 200 -j ACCEPT
        -A INPUT -m state --state RELATED,ESTABLISHED -m limit --limit 50/second --limit-burst 50 -j ACCEPT
        -A INPUT -p tcp --dport 443 -j DROP

        # Allow SSH connections
        # The --dport number should be the same port number you set in sshd_config
        -A INPUT -p tcp -s <myip> --dport ssh -j ACCEPT
        -A INPUT -p tcp -s <myip> --dport 5984 -j ACCEPT
        -A INPUT -p tcp --dport ssh -j REJECT

        # Attempt to block portscans
        # Anyone who tried to portscan us is locked out for an entire day.
        -A INPUT -m recent --name portscan --rcheck --seconds 86400 -j DROP
        -A FORWARD -m recent --name portscan --rcheck --seconds 86400 -j DROP

        # Once the day has passed, remove them from the portscan list
        -A INPUT -m recent --name portscan --remove
        -A FORWARD -m recent --name portscan --remove

        # These rules add scanners to the portscan list, and log the attempt.
        -A INPUT -p tcp -m tcp --dport 139 -m recent --name portscan --set -j LOG --log-prefix "Portscan:"
        -A INPUT -p tcp -m tcp --dport 139 -m recent --name portscan --set -j DROP
        -A FORWARD -p tcp -m tcp --dport 139 -m recent --name portscan --set -j LOG --log-prefix "Portscan:"
        -A FORWARD -p tcp -m tcp --dport 139 -m recent --name portscan --set -j DROP

        # Stop smurf attacks
        -A INPUT -p icmp -m icmp --icmp-type address-mask-request -j DROP
        -A INPUT -p icmp -m icmp --icmp-type timestamp-request -j DROP
        -A INPUT -p icmp -m icmp -j DROP

        # Drop excessive RST packets to avoid smurf attacks
        -A INPUT -p tcp -m tcp --tcp-flags RST RST -m limit --limit 2/second --limit-burst 2 -j ACCEPT

        # Don't allow pings through
        -A INPUT -p icmp -m icmp --icmp-type 8 -j DROP

        # Log iptables denied calls
        -A INPUT -m limit --limit 5/min -j LOG --log-prefix "iptables denied: " --log-level 7

        # Reject all other inbound - default deny unless explicitly allowed policy
        -A INPUT -j REJECT
        -A FORWARD -j REJECT

        COMMIT
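    With the blanket -A OUTPUT -j ACCEPT and the ESTABLISHED,RELATED rule already in the set above, the answer to the two-line question is: no, one stateful INPUT rule per service is enough, since reply packets ride back out over the established connection. A minimal sketch for the XMPP daemon (5222/5269 are the standard XMPP ports; which ports your daemons actually bind is an assumption):

        # one rule per listening port, placed before the final REJECT
        iptables -A INPUT -p tcp --dport 5222 -m state --state NEW -j ACCEPT   # XMPP client-to-server
        iptables -A INPUT -p tcp --dport 5269 -m state --state NEW -j ACCEPT   # XMPP server-to-server

    The paired OUTPUT rule in the documentation example only matters if your OUTPUT policy or chain drops traffic by default, which this ruleset does not.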


  • How can I reverse mouse movement (X & Y axis) system-wide? (Win 7 x64)

    - by Scivitri
    Short version: I'm looking for a way to reverse the X and Y mouse axis movements. The computer is running Windows 7 x64 and Logitech SetPoint 6.32. I would like a system-level, permanent fix, such as a mouse driver modification or a registry tweak. Does anyone know a solid way of implementing this, or how to find the registry values that control it? I'll settle quite happily for how to enable the orientation feature in SetPoint 6.32 for mice as well as trackballs.

    Long version: people seem never to understand why I would want this, and I commonly hear "just use the mouse right-side up!" advice. Dyslexia is not something which can be cured by "just reading things right." While I appreciate the attempts to help, I'm hoping some background may help people understand.

    I have a user with an unusual form of dyslexia, for whom mouse movements are backward. If she wants to move her cursor left, she will move the mouse right. If she wants the cursor to move up, she'll move the mouse down. She used to hold her mouse upside-down, which makes sophisticated clicking difficult, is terrible for ergonomics, and makes multi-button mice completely useless. In olden times, mouse drivers included an orientation feature (typically a hot-air balloon you dragged upward to set the mouse movement orientation) which could be used to set the relationship between mouse movement and cursor movement. Several years ago, mouse drivers were "improved," and this feature has since been limited to trackballs.

    After losing the orientation feature she went back to upside-down mousing for a bit, until finding UberOptions, a tweak for Logitech SetPoint which enables all features for all pointing devices - including the orientation feature. And there was much rejoicing. Now her mouse has died, and current Logitech mice require a newer version of SetPoint, for which UberOptions has not been updated. We've also looked at MAF-Mouse (the developer indicated the version for 64-bit Windows does not yet support USB mice) and Sakasa (while it works, commentary on the web indicates it tends to break randomly and often; it's also just a running program, so not system-wide).

    I have seen some very sophisticated registry hacks. For example, I used to use a hack which would change the codes created by the F1-F12 keys after the F-Lock key was invented and defaulted to screwing my keyboard up. I'm hoping there's a way to flip X and Y in the registry, or some other similar system-level tweak out there. Another solution could be re-enabling the orientation feature for mice as well as trackballs. It's very frustrating that input device drivers include the functionality we desperately need for an accessibility concern, but it's been disabled in the name of making the drivers more idiot-proof.


  • TFS2010 Hangs “Waiting for Build Agent”

    - by Qpirate
    I have asked this question over on SO (the link to the question is here), but I am hoping this is a better place to ask it. I have three VMs, each running the TFS Build Host Service: one has one controller and one agent, and the other two have two build agents each.

    Most of the time (7/10 builds) a build comes back with the following error message:

        TF215097: An error occurred while initializing a build for build definition BUILD_DEFINITION:
        There was no endpoint listening at http://MACHINE1:9191/Build/v3.0/Services/Controller/14
        that could accept the message. This is often caused by an incorrect address or SOAP action.
        See InnerException, if present, for more details.

    and there are no errors logged when I get this message. The following is the config file I have created:

        <configuration>
          <appSettings>
            <add key="traceWriter" value="true"/>
          </appSettings>
          <system.diagnostics>
            <switches>
              <add name="BuildServiceTraceLevel" value="4"/>
              <add name="API" value="4"/>
              <add name="Authentication" value="4"/>
              <add name="Authorization" value="4"/>
              <add name="Database" value="4"/>
              <add name="General" value="4"/>
              <add name="traceLevel" value="4"/>
            </switches>
            <trace autoflush="true" indentsize="4">
              <listeners>
                <add name="myListener"
                     type="Microsoft.TeamFoundation.TeamFoundationTextWriterTraceListener,Microsoft.TeamFoundation.Common, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                     initializeData="c:\logs\TFSBuildServiceHost.exe.log" />
                <remove name="Default" />
              </listeners>
            </trace>
          </system.diagnostics>
        </configuration>

    I do have my own custom activities in my build process, but this does not seem to be the problem, as the build sometimes does go through. I have tried refreshing the template, as some sites suggest. Has anyone come across a solution for this problem? Or can anyone tell me how to catch these errors when they happen?


  • Change Logon DPI setting in Windows 8.1

    - by jmc302005
    I love how M$ keeps making decisions for me about how I want my desktop to look. Now they have added per-user DPI settings. The problem this creates is that there is no adjustable DPI setting for the lock/logon screen. Let me explain: you can change the DPI setting to be the same across all displays, and this does affect the icons and font on the lock/logon screen. However, it does not affect any app/program that can run on the lock/logon screen.

    Example: I use a 44" flat-screen TV as the monitor on my desktop - big enough for me to sit in my recliner and use my computer. But I don't have a wireless keyboard, and it sucks having the wire from the keyboard running across the floor; plus I really don't want to keep a keyboard next to me. So I use the on-screen keyboard for logging in and quick typing (search, web addresses, etc.). The problem is that with the new DPI setup, my on-screen keyboard takes up nearly half the screen. Does M$ think we are all blind? Oh no, I remember: they think desktops should look like tablets and phones.

    I tried looking through the registry to see if I could find a setting for it. In the key HKEY_USERS\.DEFAULT\Control Panel\Desktop there is a string value named "LogicalDPIOverride" with a value of -1. I have a feeling this is where I can fix the issue. I tried changing the value to 0 and to 1, with no change in the result. Instead, I noticed that after logging out and back in, the -1 value was back in the registry. So now M$ has also added a way for us to not be able to change a setting in the registry. They are making it harder and harder for us power users to do anything with the settings in Windows. Soon we will all have the same exact Windows with absolutely no customization.

    OK, sorry for the quick rant. The real question here is: how can I change this default DPI crap? Can I use the LogPixels string that worked for DPI in Windows 7? Here are two screenshots, one of the lock screen and one of the logon screen:

    http://i.imgur.com/6RM5ufE.jpg
    http://i.imgur.com/cnY5bmm.jpg

    Please, any help will be appreciated.


  • Cannot Send Item error in Outlook - permissions to registry?

    - by Tim Alexander
    The issue I am trying to solve is users getting a "Cannot Send Item" error in Outlook 2007 connecting to Exchange 2007. Basically, if there is an image in the email (either one they have pasted in or one from another email in the chain), they get a "Cannot Send Item" error. We initially thought it was a Citrix issue, but users get it when they RDP to a server as well. Changing the message to Rich Text works 80% of the time, but I do not think this is a solution, more of a temporary workaround.

    After some troubleshooting we found that the error can be avoided by adding the user as a member of the local Power Users group - which of course is not really a fix. My thought was that a power user's ability to add/remove software may give them more access to the registry, which might allow them to get around a restriction that applies to a normal user. I have run Process Monitor against it, but the wealth of information is confusing. It initially looked like it might be an Outlook 2007 email security setting, but that does not change between power user and normal user (set to 1 in the registry: "Use the security setting from Outlook Security Settings Public Folders").

    I am struggling to fine-tune my troubleshooting to work out exactly what is blocking it. Has anyone had any experience with an error similar to this? Or are there any tips for tracking down issues via Process Monitor? I must admit my approach seems somewhat lacking :)

    EDIT: I have trawled through the two Process Monitor logs we have (one as a power user and one as a normal user). Annoyingly, I can find no obvious difference where something is denied access. There are more ACCESS DENIED events in the normal-user log, but these are quickly followed by successful entries to the same path fractions of a second later. The only thing that does stand out is an ACCESS DENIED on HKCR\.html. This does not even appear in the power-user version of the log. From what I understand, this key helps determine the default browser, which ties in nicely with the fact that nine times out of ten you can send the message as Rich Text.

    EDIT: Looks like KB2509470 was causing the issue. Not really sure why, but when I can work out what it does and why it causes the problem, I will post here - unless anyone beats me to it!


  • Recognizing Dell EqualLogic with Nagios

    - by user3677595
    EDIT: All firmware and models are compatible; that is why nothing is posted about them.

    Okay, so there will be a lot here, so please bear with me. I've been working on this now for a few hours (reading manuals and such), so I'm not just coming here right out of the blue. I am working on a PRE-EXISTING Nagios server where there are several other existing plugins and checks running and working. Now I want to add another device to check, so I made the following modifications.

    First and foremost, I added a file to /usr/local/nagios/libexec named check_equallogic.sh. Its permissions are 755, the same as all the others. I have chowned it to nagios:nagios, and in the listing it shows the owner as nagios. I then added a command to the commands.cfg file in /usr/local/nagios/etc/objects:

        # 'check_equallogic' command definition
        define command{
                command_name    check_equallogic
                command_line    $USER1$/check_equallogic -H $HOSTADDRESS$ -C $ARG1$ -t $ARG2$ $ARG3$
        }

    Following this, I created a file named equallogic.cfg in the objects directory, and it contains (more or less):

        define host{
                use             linux-server    ; Inherit default values from a template
                host_name       172.16.50.11    ; The name we're giving to this device
                alias           EqualLogic      ; A longer name associated with the device
                address         172.16.50.11    ; IP address of the device
                contact_groups  admins
        }

        # Check EqualLogic information
        define service{
                use                     generic-service
                host_name               172.16.50.11
                service_description     General Information
                check_command           check_equallogic!public!info
        }

    After ensuring that permissions are okay for all files, I restarted the Nagios service with no errors. When I go into the web GUI, I get the following error after the check runs:

        (Return code of 127 is out of bounds - plugin may be missing)

    Extra, probably unrelated problem: when I log into the EqualLogic server, under the audit logs I get the following error:

        Level: AUDIT
        Time: 26/05/2014 3:59:13 PM
        Member: ps4100-1
        Subsystem: agent
        Event ID: 22.7.1
        SNMP packet validation failed, request received from 172.16.10.11

    An snmpwalk receives a timeout, whereas others succeed. I will work on importing the MIBs tomorrow. The reason I mention it is that I want to make sure it is only a MIB issue for the SNMP; if it is, then ignore this part. I am entirely unsure of what to do here.
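    Return code 127 is the shell's "command not found", and one observation from the snippets above: the file dropped into libexec is check_equallogic.sh, while the command definition calls $USER1$/check_equallogic (no .sh). Testing the expanded command line as the nagios user should confirm it (arguments taken from the service definition):

        # run what Nagios would run
        sudo -u nagios /usr/local/nagios/libexec/check_equallogic -H 172.16.50.11 -C public -t info

        # versus the file that actually exists
        sudo -u nagios /usr/local/nagios/libexec/check_equallogic.sh -H 172.16.50.11 -C public -t info

    Either rename the file or change command_line to match; a shebang pointing at a missing interpreter can produce the same 127.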


  • Switch flooding when bonding interfaces in Linux

    - by John Philips
    +--------+
    | Host A |
    +----+---+
         | eth0 (AA:AA:AA:AA:AA:AA)
         |
    +----+-----+
    | Switch 1 | (layer2/3)
    +----+-----+
         |
    +----+-----+
    | Switch 2 |
    +----+-----+
         |
    +----+-----+
    | Switch 3 |
    +-+-+--+-+-+
      | |  | |
      | |  | +---- eth0 (B0:B0:B0:B0:B0:B0) ---+
      | |  +------ eth1 (B1:B1:B1:B1:B1:B1)    |  Host B
      | +--------- eth4 (B4:B4:B4:B4:B4:B4)    |  (eth0/eth1/eth4/eth5 bonded)
      +----------- eth5 (B5:B5:B5:B5:B5:B5) ---+

    Topology overview: Host A has a single NIC. Host B has four NICs, which are bonded using the balance-alb mode. Both hosts run RHEL 6.0, and both are on the same IPv4 subnet.

    Traffic analysis: Host A is sending data to Host B using an SQL database application.

    - Traffic from Host A to Host B: the source int/MAC is eth0/AA:AA:AA:AA:AA:AA, the destination int/MAC is eth5/B5:B5:B5:B5:B5:B5.
    - Traffic from Host B to Host A: the source int/MAC is eth0/B0:B0:B0:B0:B0:B0, the destination int/MAC is eth0/AA:AA:AA:AA:AA:AA.

    Once the TCP connection has been established, Host B sends no further frames out eth5. The MAC address of eth5 expires from the bridge tables of both Switch 1 and Switch 2. Switch 1 continues to receive frames from Host A destined for B5:B5:B5:B5:B5:B5. Because Switch 1 and Switch 2 no longer have bridge table entries for B5:B5:B5:B5:B5:B5, they flood the frames out all ports on the same VLAN (except for the one each frame came in on, of course).

    Reproduce: if you ping Host B from a workstation connected to either Switch 1 or 2, B5:B5:B5:B5:B5:B5 re-enters the bridge tables and the flooding stops. After five minutes (the default bridge table timeout), flooding resumes.

    Question: it is clear that on Host B, frames arrive on eth5 and exit out eth0. This seems OK, as that's what the Linux bonding algorithm is designed to do - balance incoming and outgoing traffic. But since the switch stops receiving frames with the source MAC of eth5, that MAC gets timed out of the bridge table, resulting in flooding. Is this normal? Why aren't any more frames originating from eth5? Is it because there is simply no other traffic going on (the only connection is a single large data transfer from Host A)? I've researched this for a long time and haven't found an answer. Documentation states that no switch changes are necessary when using mode 6 of Linux interface bonding (balance-alb). Is this behavior occurring because Host B doesn't send any further packets out of eth5, whereas in normal circumstances it would be expected to? One solution is to set up a cron job which pings Host B to keep the bridge table entries from timing out, but that seems like a dirty hack.
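    A couple of hedged checks that would confirm the "eth5 simply never transmits" theory above (interface names and MACs from the question; the bond device name bond0 is an assumption):

        # on Host B: does eth5's TX counter ever advance during the transfer?
        watch -n 5 'cat /sys/class/net/eth5/statistics/tx_packets'

        # the bond driver's own view of its slaves and their MAC assignments
        cat /proc/net/bonding/bond0

    If tx_packets never moves, the switches have nothing to learn from, and keeping the CAM entry alive (the ping cron hack, or a longer bridge ageing time configured on the switches) is about all that's left short of changing bond modes.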


  • How to configure DD-WRT routing table when creating an isolated network segment for PCI C VT compliance

    - by tetranz
    I'm the volunteer support and sysadmin person at a small private school. We need to set up a PCI-compliant Windows PC as a virtual terminal for credit card processing. I've read questionnaire SAQ C-VT, and, to quote, this computer needs to be accessed "via a computer that is isolated in a single location, and is not connected to other locations or systems within your environment (this can be achieved via a firewall or network segmentation to isolate the computer from other systems)".

    Our setup is as follows:

    - The DSL modem from the ISP is set up to be a "transparent pipe" with no extra services. It goes into the WAN port of a Linksys WRT54-GL running DD-WRT. The LAN is 192.168.1.x.
    - There are a couple of other WRT54-GL / DD-WRT devices: one is used as a wireless AP and another as a client bridge.
    - To isolate the VT (virtual terminal) machine, I have another DD-WRT device. Its WAN is connected to a port on the 192.168.1.x LAN, and the virtual terminal machine is connected to its LAN, which is 192.168.10.x. The SPI firewall etc. is turned on. It's basically the default DD-WRT gateway setup where the "ISP" is our own LAN.

    That's working: all incoming traffic to the VT machine is blocked, including from our own LAN. The VT can access the internet, BUT - and here's the problem - it can also ping any of the computers on the 192.168.1.x LAN. I think I need to stop that. I'm guessing that I could do something with the static routing table in the VT machine's DD-WRT device: route anything going to 192.168.1.x, other than the gateway at 192.168.1.1, to 0.0.0.0 or something like that. That's where I'm stuck, at the end of my knowledge.

    Or... do I need to get yet another DD-WRT so the network is "balanced"? Maybe the internet from the DSL should go into one DD-WRT which has only two devices on its LAN, i.e. two other DD-WRTs, one for the main LAN and one for the VT. I think that would do it, but I'd like to avoid the extra cost and complexity if I don't need it. Thanks
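    A firewall rule is probably a cleaner tool than a null route here, since DD-WRT exposes iptables. A minimal sketch, to be placed in the VT router's firewall script (subnet addresses from the question):

        # on the VT-side DD-WRT: refuse to forward anything from the VT subnet
        # to hosts on the upstream 192.168.1.x LAN
        iptables -I FORWARD -s 192.168.10.0/24 -d 192.168.1.0/24 -j DROP

    Internet-bound packets still flow, because their destination address is a public IP - 192.168.1.1 is only the next hop, not the destination - so the rule never matches them.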


  • Fedora 11 System - Failed Hard Drive Removed, and Boot gets GRUB Hard Disk Error

    - by Mindful
    Greetings. I have a machine with a 120GB ATA drive that has what I thought to be non-essential data on it, plus a 320GB SATA hard drive with the OS, applications and files (the good data I want to keep). The 120GB ATA drive is failing, I believe, as my computer kept slowing to a halt. However, when I remove the drive in the BIOS, my computer will not start; it says "GRUB Hard Disk Error". I know that my Fedora system has an LVM setup. I am looking to just remove the 120GB drive from the mix and run with one hard drive. How do I recover? Thank you. I have access to a Linux live CD right now and can make any changes; however, it won't boot into my OS - it fails.

    UPDATE: here's my grub.conf:

        # grub.conf generated by anaconda
        #
        # Note that you do not have to rerun grub after making changes to this file
        # NOTICE: You have a /boot partition. This means that
        # all kernel and initrd paths are relative to /boot/, eg.
        # root (hd1,0)
        # kernel /vmlinuz-version ro root=/dev/VolGroup00/LogVol00
        # initrd /initrd-version.img
        #boot=/dev/sda1
        default=0
        timeout=5
        splashimage=(hd1,0)/grub/splash.xpm.gz
        hiddenmenu
        title Fedora (2.6.30.10-105.2.23.fc11.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.30.10-105.2.23.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.30.10-105.2.23.fc11.i686.PAE.img
        title Fedora (2.6.30.9-102.fc11.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.30.9-102.fc11.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.30.9-102.fc11.i686.PAE.img
        title Fedora (2.6.27.24-170.2.68.fc10.i686.PAE)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686.PAE ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.24-170.2.68.fc10.i686.PAE.img
        title Fedora (2.6.27.24-170.2.68.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.24-170.2.68.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.24-170.2.68.fc10.i686.img
        title Fedora (2.6.27.21-170.2.56.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.21-170.2.56.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.21-170.2.56.fc10.i686.img
        title Fedora (2.6.27.19-170.2.35.fc10.i686)
            root (hd1,0)
            kernel /vmlinuz-2.6.27.19-170.2.35.fc10.i686 ro root=/dev/VolGroup00/LogVol00 rhgb quiet
            initrd /initrd-2.6.27.19-170.2.35.fc10.i686.img
        title Upgrade to Fedora 10 (Cambridge)
            kernel /upgrade/vmlinuz preupgrade repo=hd::/var/cache/yum/preupgrade stage2=http://chi-10g-1-mirror.fastsoft.net/pub/linux/fedora/linux/releases/10/Fedora/i386/os/images/install.img ks=hd:UUID=f11769ba-29bc-46de-8c40-a949720a438e:/upgrade/ks.cfg
            initrd /upgrade/initrd.img
        title Win
            rootnoverify (hd0,0)
            chainloader +1
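    The grub.conf is a useful clue: everything boots from (hd1,0), the second BIOS disk, and the MBR boot code most likely lives on the old 120GB drive (hd0). Once that drive is pulled, the SATA disk becomes hd0 and nothing points at it. A hedged recovery sketch from the live CD (device names are guesses; confirm with fdisk -l first):

        sudo fdisk -l                          # identify the 320GB SATA disk, e.g. /dev/sda
        sudo vgchange -ay                      # activate VolGroup00 from grub.conf
        sudo mount /dev/VolGroup00/LogVol00 /mnt
        sudo mount /dev/sda1 /mnt/boot         # the separate /boot partition
        sudo grub-install --root-directory=/mnt /dev/sda

    After that, the (hd1,0) references in grub.conf (the root and splashimage lines) would need to become (hd0,0), since only one disk remains. The "title Win" entry chainloads (hd0,0); if Windows actually lived on the failing 120GB disk, that entry goes away with it.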


  • nginx + Jetty - thousands of connections stuck in LAST_ACK

    - by virulence
    I have a FreeBSD machine with jails - two in particular: one runs nginx, and another runs a Java program that accepts requests via Jetty (embedded mode). Jetty receives upwards of 500 requests/sec constantly, and lately there has been an issue where I constantly have over 60,000 connections in the LAST_ACK state between nginx and Jetty.

    Distribution of all connections (includes some other services, particularly php-fpm):

        root@host:/root # netstat -an > conns.txt
        root@host:/root # cat conns.txt | awk '{print $6}' | sort | uniq -c | sort -n
            18 LISTEN
           112 CLOSING
           485 ESTABLISHED
           650 FIN_WAIT_2
          1425 FIN_WAIT_1
          3301 TIME_WAIT
         64215 LAST_ACK

    Distribution of nginx - Jetty connections:

        root@host:/root # cat conns.txt | grep '10.10.1.57' | awk '{print $6}' | sort | uniq -c | sort -n
             1
             3 CLOSE_WAIT
             3 LISTEN
            18 FIN_WAIT_2
           125 ESTABLISHED
         64193 LAST_ACK

    I'd prefer every request to fully close the connection. Client requests are about 10 minutes apart from each other, so connections must be closed. Some of the connections:

        tcp4  0  0  10.10.1.50.46809  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46805  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46797  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46794  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46790  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46789  10.10.1.57.9050  LAST_ACK
        tcp4  0  0  10.10.1.50.46771  10.10.1.57.9050  LAST_ACK
        etc..

    On Jetty's end I've set maxIdleTime to 2000 - before this, all connections were in ESTABLISHED, but now they are LAST_ACK. On Jetty's end I've also set Connection: close (i.e. response.setHeader(HttpHeaders.CONNECTION, HttpHeaderValues.CLOSE);). Jetty never reports a lot of open connections - always very few. PF/IPFW is not currently being used. On the nginx side, reset_timedout_connection is on. I cannot figure out how to get nginx or Jetty to forcibly close the connection. Is this simply something that needs to be fixed in Jetty so that it fully closes the socket after the request finishes? Thanks a lot in advance.

    EDIT: forgot my nginx config for the proxy setup:

        proxy_pass http://10.10.1.57:9050;
        proxy_set_header HTTP_X_GEOIP $http_x_geoip;
        proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_set_header Connection "";
        proxy_http_version 1.1;

    EDIT2: Forcing Jetty to close the connection via request.getConnection().getEndPoint().close() does nothing - it's obvious the connection IS being closed (as it's in LAST_ACK), but why isn't it getting past this state? Is nginx keeping the connection open to the backend for some reason?
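    LAST_ACK means Jetty's side has sent its FIN and is waiting for a final ACK that never arrives, so watching one teardown on the wire should show which side goes quiet. A hedged capture (IP/port from the netstat output above; the interface depends on how the jails are wired - lo0 for loopback-networked jails, otherwise the jail's NIC):

        tcpdump -ni lo0 host 10.10.1.57 and port 9050 \
            and 'tcp[tcpflags] & (tcp-fin|tcp-rst) != 0'

    One detail worth noting in the config above: proxy_set_header Connection "" plus proxy_http_version 1.1 is half of the standard recipe for nginx keepalive to upstreams (the other half being a keepalive directive in an upstream block), which pulls in the opposite direction from the Connection: close that Jetty is sending.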


  • routing through multiple subinterfaces in debian

    - by Kstro21
    My question is as simple as the title: I have a Debian 6 box with two NICs and three different subnets on a single interface, like this:

        auto eth0
        iface eth0 inet static
            address 192.168.106.254
            netmask 255.255.255.0

        auto eth0:0
        iface eth0:0 inet static
            address 172.19.221.81
            netmask 255.255.255.248

        auto eth0:1
        iface eth0:1 inet static
            address 192.168.254.1
            netmask 255.255.255.248

        auto eth1
        iface eth1 inet static
            address 172.19.216.3
            netmask 255.255.255.0
            gateway 172.19.216.13

    eth0 is connected to a switch with three different VLANs; eth1 is connected to a router. There are no iptables DROPs, so all traffic is allowed. Now, passing traffic through eth0 is OK, and passing traffic through eth0:0 is OK, but passing traffic through eth0:1 is not working. I can ping the IP address of that subinterface from a PC that uses it as its default gateway, but I can't get to servers in the subnet of the eth1 interface. The traffic is not passing, even when I set iptables to log all traffic in the FORWARD chain: I can see the traffic there, but it is not really passing.

    The funny thing is that I can do it all the other way around - passing from eth1 to eth0:1 (RDP, telnet, ping, etc.) works. By doing some work with iptables, I managed to pass some traffic from eth0:1 to eth1; the rules look like this:

        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m multiport --dports 25,110,5269 -j DNAT --to-destination 172.19.216.1
        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p udp -m udp --dport 53 -j DNAT --to-destination 172.19.216.9
        iptables -t nat -A PREROUTING -d 192.168.254.1/32 -p tcp -m tcp --dport 21 -j DNAT --to-destination 172.19.216.11
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 172.19.221.80/29 -j SNAT --to-source 172.19.221.81
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -d 192.168.254.0/29 -j SNAT --to-source 192.168.254.1
        iptables -t nat -A POSTROUTING -s 172.19.216.0/24 -o eth0 -j SNAT --to-source 192.168.106.254

    Doing this, it works, but it is really a headache to have to map each port to a server - imagine if I move a service to another server. So now I have doubts: can Debian route through multiple subinterfaces? Is there a limit to this? If not, what am I doing wrong, when I have the same setup with other subnets and it works OK? Without the iptables rules in the nat table, it doesn't work. Thanks, and I hope for good comments/answers.
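    Since the frames show up in the FORWARD log but replies never come back without NAT, one hedged suspect is the return path: the upstream router at 172.19.216.13 likely has no route back to 192.168.254.0/29, so only SNATed traffic gets answered. Two quick checks on the Debian box itself:

        sysctl net.ipv4.ip_forward                    # must be 1
        sysctl net.ipv4.conf.all.rp_filter \
               net.ipv4.conf.eth1.rp_filter           # strict rp_filter can also eat asymmetric replies

    If the missing return route is the cause, adding something like "ip route add 192.168.254.0/29 via 172.19.216.3" on the upstream router (addresses from the question) would replace the whole per-port DNAT/SNAT workaround.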


  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7, 32-bit) to help me with this process:

    - I have my documents, music, video clips, movies and pictures on my hard disk. These will not be scattered around the system, but will all live inside C:\Senthil\
    - At the end of every week, I want to plug in an external hard disk and run a program that makes sure everything inside C:\Senthil\ is also present on the external disk: files deleted from C:\Senthil\ should be deleted there, new files should be copied, etc. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk.

    A couple of important requirements and points:

    - I do NOT need multiple versions or historic versions. I don't need previous versions of my files; I only want the latest copy to be present in my "backup".
    - Incremental backup makes sense: if files were not touched since the last backup, they need not be copied.
    - The size of my folder will run into GBs and, in a year or two, will go into TBs, but I will make sure the external HDD is at least as big as the source folder.
    - I do not want it to run automatically, because when I accidentally delete a file in my source, an automatic run will delete the one in the backup (I know this is why we have versioning facilities). I just want to be able to run it manually, so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation.

    Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want, and the software can keep its smartness to itself :D The main reason I am asking this question is that I am a software developer and could write this software myself, but I am a little constrained by time at the moment and want to know if there is an existing program that can do it.

    Kindly don't worry about earthquakes or fire or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories; I don't think I will digitally store any life-ruining documents; and this backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift+Delete, or myself getting a little careless at times!

    In short: is there a file/folder syncing tool that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)
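    For what it's worth, the semantics described - a manual, one-way, incremental, delete-propagating mirror - are exactly what rsync's -a --delete does. A sketch, assuming a Cygwin-style rsync build on Windows (cwRsync or similar), where drive letters appear under /cygdrive, and with a made-up target folder on the external E: drive:

        # preview first: show every copy and delete without doing anything
        rsync -a --delete --dry-run /cygdrive/c/Senthil/ /cygdrive/e/Senthil/

        # then mirror for real
        rsync -a --delete /cygdrive/c/Senthil/ /cygdrive/e/Senthil/

    Running with --dry-run first fits the "I stay in control" requirement: you see exactly what would change before committing to it.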


  • 40k Event Log Errors an hour Unknown Username or bad password

    - by ErocM
    I am getting about 200k of these an hour:

        An account failed to log on.

        Subject:
            Security ID:      SYSTEM
            Account Name:     TGSERVER$
            Account Domain:   WORKGROUP
            Logon ID:         0x3e7
            Logon Type:       4

        Account For Which Logon Failed:
            Security ID:      NULL SID
            Account Name:     administrator
            Account Domain:   TGSERVER

        Failure Information:
            Failure Reason:   Unknown user name or bad password.
            Status:           0xc000006d
            Sub Status:       0xc0000064

        Process Information:
            Caller Process ID:   0x334
            Caller Process Name: C:\Windows\System32\svchost.exe

        Network Information:
            Workstation Name:       TGSERVER
            Source Network Address: -
            Source Port:            -

        Detailed Authentication Information:
            Logon Process:            Advapi
            Authentication Package:   Negotiate
            Transited Services:       -
            Package Name (NTLM only): -
            Key Length:               0

        This event is generated when a logon request fails. It is generated on the computer
        where access was attempted. The Subject fields indicate the account on the local system
        which requested the logon. This is most commonly a service such as the Server service,
        or a local process such as Winlogon.exe or Services.exe. The Logon Type field indicates
        the kind of logon that was requested. The most common types are 2 (interactive) and
        3 (network). The Process Information fields indicate which account and process on the
        system requested the logon. The Network Information fields indicate where a remote logon
        request originated. Workstation name is not always available and may be left blank in
        some cases. The authentication information fields provide detailed information about
        this specific logon request. Transited services indicate which intermediate services
        have participated in this logon request. Package name indicates which sub-protocol was
        used among the NTLM protocols. Key length indicates the length of the generated session
        key. This will be 0 if no session key was requested.

    On my server, I changed my administrative username to something else, and since then I've been inundated with these messages. I found on http://technet.microsoft.com/en-us/library/cc787567(v=WS.10).aspx that the 4 means "Batch logon type is used by batch servers, where processes may be executing on behalf of a user without their direct intervention", which really doesn't shed any light on it for me. I checked the services, and they are all logging on as Local System or Network Service - nothing as administrator. Anyone have any idea how I can tell where these are coming from? I would assume some program is crapping out... Thanks in advance!


  • How do I (robustly) remotely execute tasks on Windows workstations in a domain?

    - by Zac B
    I'm not even sure if "robustly" is a word. Anyway.

    Context: we have a few hundred Windows 7 workstations on a LAN. We use AD/GPO management pretty heavily, but there are a lot of periodic and/or manual maintenance tasks we need to do that can't be done via GPO or scheduled task. For example, say I want to execute program X (which runs silently, in the background, and doesn't bother the user) on workstation Y, or execute task A on workstation group B, either on a schedule or on demand. Kicking users off their computers to do this (i.e. using RDP) is a no-no, and doesn't work on groups anyway.

    Question: what's the best way to do this that is robust enough that, after setup, I could give it to beginner support people (read: people who are phobic of the command line and get confused by GUI interfaces more complicated than Firefox)? I'm a competent programmer, and if there is a robust set of tools or a framework out there for this type of task, I'd consider hacking something together myself if it didn't take too long. If there's some combination of tools or techniques that others use to make remote workstation administration doable by beginners, I have yet to find it.

    For those who care about the "why": I'm midlevel IT and was told to implement a remote management solution that allows arbitrary/scheduled remote execution, with confirmation that programs actually ran remotely and the ability to view what they returned. "Why?" I asked. "Can't I just use PsExec and the Task Scheduler on a dispatcher machine?" "No," I was told, "'Joe' the second-week tech is going to be in charge of this one, and he needs something simple with a GUI."

    What I've tried: I've played with making a bunch of one-clickable "transfer files to remote computer and run them with PsExec" batch/VB scripts, but those tend to break down and don't easily support running on customizable groups. I've played a little bit with the Windows version of Puppet, but it doesn't support arbitrary-time remote execution (its ability to group computers into a tree/node structure is really nice, though). I've used an older version of Altiris, and while it does a lot of what I want, its interface is awful, it's slow, it crashes a lot, and it's probably too expensive for management. SwiftWater's DMS solution does some of what I want, but it's very underdeveloped, closed-source (not a deal-breaker, but not ideal), and I get the impression that support and reliability are lacking.


  • Amazon EC2 Ubuntu with GitLab and nginx - can't load?

    - by thebluefox
    OK, so I've spooled up an Amazon EC2 server running Ubuntu and then followed the instructions at http://doc.gitlab.com/ce/install/installation.html to install GitLab. The only step I've not been able to complete is the status check:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    which I presume is because my EC2 instance is just a free-tier setup, so it isn't that well specced.

    Regardless, I've been trying to access GitLab through my browser. I've set up the Elastic IP and pointed my domain at it (for the purpose of this, let's say it's git.mydom.co.uk); a whois on the domain shows it's pointing to the right place. For some reason, though, I get "Oops, Chrome could not connect to git.mydom.co.uk".

    Now, for a period of time I was getting the nginx holding page (telling me I still needed to perform configuration), but that disappeared after removing the default file from /etc/nginx/sites-enabled/ (after reading that this could be the issue on a troubleshooting page). Since then I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file in /sites-enabled and /sites-available to www-data, as suggested elsewhere, but I could only change the permission of the file in /sites-available, not the symlinked one in /sites-enabled. The content of this file is as follows:

        upstream gitlab {
            server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
            listen *:80 default_server;   # e.g., listen 192.168.1.1:80; in most cases *:80 is a good idea
            server_name git.mydom.co.uk;  # e.g., server_name source.example.com;
            server_tokens off;            # don't show the version number, a security best practice
            root /home/git/gitlab/public;

            # Increase this if you want to upload large attachments
            # or if you want to accept large git objects over http
            client_max_body_size 20m;

            # individual nginx logs for this gitlab vhost
            access_log /var/log/nginx/gitlab_access.log;
            error_log  /var/log/nginx/gitlab_error.log;

            location / {
                # serve static files from defined root folder;
                # @gitlab is a named location for the upstream fallback, see below
                try_files $uri $uri/index.html $uri.html @gitlab;
            }

    All the paths mentioned in here look OK... I'm about at the end of my knowledge now!
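    The ENOMEM from rake is a classic free-tier symptom: a micro instance has no swap by default, and rake can fail simply trying to fork whoami. A hedged workaround before re-running the check (the 1GB size is arbitrary):

        # add a swap file so rake can fork, then retry the check
        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    For the connection refusals, two separate layers are worth ruling out before touching the vhost file again: is nginx actually listening (sudo netstat -tlnp | grep :80 on the instance), and does the EC2 security group allow inbound port 80 at all? A "could not connect" error, as opposed to a 502, points at one of those rather than at the nginx config.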


  • Creating a really public Windows network share

    - by Timur Aydin
    I want to create a shared folder under Windows (XP, Vista, and Windows 7) which can be mounted from a Linux system without prompting for a username/password. But before attempting this, I first wanted to establish that it works between two Windows 7 machines.

    On machine A (the server that will hold the public share), I created a folder and set its permissions so that Everyone has read/write access. Then I visited Control Panel - Network and Sharing Center - Advanced sharing settings and selected "Turn off password protected sharing". Then, on machine B (the client that wants to access the public share with no username/password prompt), I tried to map a network drive and was immediately prompted for a password. Some searching on Google suggested changing "Accounts: Limit local account use of blank passwords to console logon only" to Disabled. Tried that; no luck, still getting the username/password prompt. If I enter the username/password, I am not prompted for it again and can use the share as long as the session is active. But I really need to access the share without any username/password transaction whatsoever, and this is not just a convenience-related thing. Here is the actual reason: the device that will access this Windows network share is an embedded system running uClinux. It will mount the share locally and then play media files. Its only user interface is a JavaScript-based web page. So if there is going to be any username/password transaction, I would have to ask the user to enter credentials over the web page, which would be ridiculously insecure and completely exposed to packet sniffing.

    After hours of experiments, I have found one way to make this happen, but I am not really very fond of it... I first create a new user (shareuser) and give it a password (sharepass). Then I open the Group Policy editor and set "Deny log on locally" to "A\shareuser". Then I create a folder on A and share it so that shareuser has read access. This way, shareuser cannot log in to A but can access the shared folder, and if someone discovers shareuser/sharepass through network sniffing, they can just access the shared folder, but can't log on to A. The same thing can be achieved by enabling the Guest user and then removing "Guest" from the "Deny access to this computer from the network" setting in the Group Policy editor. Again, Guest can mount the public share, but logging in to A as Guest won't be possible, because Guest is already not allowed to log in by default.

    So my question is: how can I create a network share that is truly public, so that it can be mounted from a Linux machine without requiring a password? Sorry for the long question, but I wanted to explain the reason for really needing this...
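    For the Linux/uClinux side of the test, it may help to see what a passwordless mount attempt actually looks like; mount.cifs has an explicit guest option. A sketch (the share name and mount point are made up):

        # try an anonymous/guest mount against machine A
        mount -t cifs //A/Public /mnt/share -o guest

        # equivalent: a named user with an empty password
        mount -t cifs //A/Public /mnt/share -o username=shareuser,password=

    If the guest mount fails while the shareuser one works, the Windows side is still mapping anonymous access to an account it refuses, which is consistent with the Guest-account behavior described above.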

  • Proxy Error 502 "Reason: Error reading from remote server" with Apache 2.2.3 (Debian) mod_proxy and Jetty 6.1.18

    - by Martin
Apache is receiving requests on port :80 and proxying them to Jetty on port :8080. The error page reads:

    The proxy server received an invalid response from an upstream server
    The proxy server could not handle the request GET /.

My dilemma: everything works fine normally (fast requests, and requests lasting a few seconds or a few tens of seconds are processed OK). Problems occur when request processing takes long (a few minutes?). If I issue the request directly to Jetty on port :8080, it is processed OK. So the problem most likely sits somewhere between Apache and Jetty, where I am using mod_proxy. How do I solve this? I have already tried some "tricks" related to KeepAlive settings, without luck. Here is my current configuration; any suggestions?

    #keepalive Off                      ## I have tried this, does not help
    #SetEnv force-proxy-request-1.0 1   ## I have tried this, does not help
    #SetEnv proxy-nokeepalive 1         ## I have tried this, does not help
    #SetEnv proxy-initial-not-pooled 1  ## I have tried this, does not help
    KeepAlive 20                        ## I have tried this, does not help
    KeepAliveTimeout 600                ## I have tried this, does not help
    ProxyTimeout 600                    ## I have tried this, does not help

    NameVirtualHost *:80
    <VirtualHost _default_:80>
      ServerAdmin [email protected]
      ServerName www.mydomain.fi
      ServerAlias mydomain.fi mydomain.com mydomain www.mydomain.com

      ProxyRequests On
      ProxyVia On
      <Proxy *>
        Order deny,allow
        Allow from all
      </Proxy>
      ProxyRequests Off
      ProxyPass / http://www.mydomain.fi:8080/ retry=1 acquire=3000 timeout=600
      ProxyPassReverse / http://www.mydomain.fi:8080/

      RewriteEngine On
      RewriteCond %{SERVER_NAME} !^www\.mydomain\.fi
      RewriteRule /(.*) http://www.mydomain.fi/$1 [redirect=301L]

      ErrorLog /var/log/apache2/error.log
      # Possible values include: debug, info, notice, warn, error, crit,
      # alert, emerg.
      LogLevel warn
      CustomLog /var/log/apache2/access.log combined
      ServerSignature On
    </VirtualHost>

Here are also the log entries from a failing request:

    74.125.43.99 - - [29/Sep/2010:20:15:40 +0300] "GET /?wicket:bookmarkablePage=newWindow:com.mydomain.view.application.reports.SaveReportPage HTTP/1.1" 502 355 "https://www.mydomain.fi/?wicket:interface=:0:2:::" "Mozilla/5.0 (Windows; U; Windows NT 6.1; fi; rv:1.9.2.10) Gecko/20100914 Firefox/3.6.10"
    [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: error reading status line from remote server www.mydomain.fi, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
    [Wed Sep 29 20:20:40 2010] [error] [client 74.125.43.99] proxy: Error reading from remote server returned by /, referer: https://www.mydomain.fi/?wicket:interface=:0:2:::
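One detail worth noting from the log above: the proxy error is stamped exactly 300 seconds after the request arrived (20:15:40 to 20:20:40), and 300 seconds happens to be Apache's shipped default for the global Timeout directive. It may be worth confirming that the global value, and not only ProxyTimeout, is raised, since in some older 2.2.x configurations the global value has been reported to win. A sketch, assuming the stock Debian layout (the 600 is illustrative):

    # /etc/apache2/apache2.conf -- the shipped default is Timeout 300
    Timeout 600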

  • "log on as a batch job" user rights removed by what GPO?

    - by LarsH
I am not much of a server administrator, but I get my feet wet when I have to. Right now I'm running some COTS software on a Windows 2008 Server machine. The software installer creates a few user accounts for running its processes, and then gives those users the right to "log on as a batch job".

Every so often (e.g. yesterday at 2:52pm and this morning at 7:50am), those rights disappear, and the software then stops working. I can verify that the user rights are gone by using

    secedit /export /cfg e:\temp\uraExp.inf /areas USER_RIGHTS

and I have a script that does this every 30 seconds and logs the results with a timestamp, so I know when the rights disappear. What I see from the export is that in the "good" state, i.e. after I install the software and it's working correctly, the SeBatchLogonRight line in the secedit export includes the user accounts created by the software; but every few hours (sometimes more often), those accounts are removed from that line.

The same thing can be seen with the GUI tool under Local Security Policy > Security Settings > Local Policies > User Rights Assignment > Log on as a batch job: in the "good" state that policy includes the needed user accounts, and in the bad state it does not.

Based on the above-mentioned logging script and the timestamps at which the user rights are being removed, I can see clearly that some GPO is causing the change. The GPO Operational log shows GPOs being processed at exactly the right times, e.g.:

    Starting Registry Extension Processing.
    List of applicable GPOs: (Changes were detected.)
    Local Group Policy

I have run GPO processing on demand using gpupdate /force and was able to verify that this caused the user rights to be removed.

We have looked over local group policies till our eyes are crossed, trying to figure out which one might be stripping these user rights to "log on as a batch job". We have not configured any local group policies on this machine, that we know of; so is there a default local group policy that might typically do such a thing? Are there typical domain policies that would do this?

I have been working with our IT staff colleagues to troubleshoot the problem, but none of them are really GPO experts... They wear many hats, and they do what they need to do in order to keep most things running. Any suggestions would be greatly appreciated!
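Two things that may help narrow this down, sketched here with illustrative paths (uraFix.inf is a hypothetical template you would create, containing a [Privilege Rights] section that lists the needed accounts against SeBatchLogonRight):

    rem produce an HTML report showing, per setting, which GPO won (RSoP data)
    gpresult /h c:\temp\gpreport.html /f

    rem stopgap: re-apply the right from the template until the culprit is found
    secedit /configure /db c:\temp\uraFix.sdb /cfg c:\temp\uraFix.inf /areas USER_RIGHTS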
