Search Results

Search found 15438 results on 618 pages for 'static allocation'.


  • What is the best way to achieve RPO of zero and lowest possible RTO (less than 15 minutes) with SQL 2008 R2?

    - by Adrian Hope-Bailie
    We are running a payments (EFT transaction processing) application which processes high volumes of transactions 24/7, and we are currently investigating a better way of doing DB replication to our disaster recovery site.

    Our current and previous strategies have included using both DoubleTake and Redgate to replicate data to a warm stand-by. DoubleTake is the supported solution from the payments software vendor, but their (DoubleTake's) support in South Africa is very poor. We had a few issues and simply couldn't ever resolve them, so we had to give up on DoubleTake. We have been using Redgate to manually read the data from the primary site (via queries) and write it to the DR site, but this is:

    - A bad solution
    - Getting the software vendor hot and bothered whenever we have support issues, as it has a tendency to interfere with the payment application, which is very DB intensive.

    We recently upgraded the whole system to run on SQL 2008 R2 Enterprise, which means we should probably be looking at using some of the built-in replication features. The server has 2 fairly large databases with a mixture of tables containing highly volatile transactional data and pretty static configuration data. Replication would be done over a WAN link to a separate physical site and needs to achieve the following objectives:

    - RPO: Zero loss - this is transactional data with financial impact, so we can't lose anything.
    - RTO: Tending to zero - the business depends on our ability to process transactions; every minute we are down we are losing money.

    I have looked at a few of the other questions/answers but none meet our case exactly:

    - SQL Server 2008 failover strategy - Log shipping or replication?
    - How to achieve the following RTO & RPO with logshipping only using SQL Server?
    - What is the best of two approaches to achieve DB Replication?

    My current thinking is that we should use mirroring, but I am concerned that for RPO: 0 we will need to do delayed commits, and this could impact the performance of the primary DB, which is not an option. Our current DR process is to:

    1. Stop incoming traffic to the primary site and allow all in-flight transactions to complete.
    2. Allow the replication to DR to complete.
    3. Change network routing to route to the DR site.
    4. Start all applications and services on the secondary site (ideally we can change this to a warmer stand-by whereby the applications are already running but not processing any transactions).

    In other words, the DR database needs to catch up with the primary as quickly as possible and be ready for processing as the new primary. We would then need to be able to reverse this when we are ready to switch back. Is there a better option than mirroring (should we be doing log-shipping too), and can anyone suggest other considerations that we should keep in mind?
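
    For reference, synchronous database mirroring is the built-in SQL 2008 R2 feature that targets RPO zero; a minimal sketch of enabling it is below. The database name and host names are placeholders, and the mirroring endpoints must already exist on both servers:

        -- On the DR (mirror) server, after restoring a full backup WITH NORECOVERY:
        ALTER DATABASE Payments SET PARTNER = 'TCP://primary.example.local:5022';

        -- On the primary (principal) server:
        ALTER DATABASE Payments SET PARTNER = 'TCP://dr.example.local:5022';
        ALTER DATABASE Payments SET SAFETY FULL;  -- synchronous commit: RPO = 0

    With SAFETY FULL every commit waits for the mirror to harden the log; that is exactly the "delayed commit" cost mentioned above, so it is worth benchmarking over the actual WAN link before ruling it out.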

    Read the article

  • Ubuntu getting wrong hostname from DHCP

    - by sam
    When provisioning new Ubuntu Precise (12.04) servers, the hostname they get seems to be generated from the DNS search path, not from a reverse lookup on the IP address. Take the following configuration, where BIND is configured with the hostname and reverse name.

    Normal zone:

        $TTL 600
        $ORIGIN srv.local.net.
        @ IN SOA ns0.local.net. hostmaster.local.net. ( 2014082101 10800 3600 604800 600 )
        @ IN NS ns0.local.net.
        @ IN MX 5 mail.local.net.
        my-new-server IN A 10.32.2.30

    And reverse:

        @ IN SOA ns0.local.net. hostmaster.local.net. ( 2014082101 10800 3600 604800 600 )
        @ IN NS ns0.local.net.
        $ORIGIN 32.10.in-addr.arpa.
        30.2 IN PTR my-new-server.srv.local.net.

    Then DHCPD is configured to hand out static leases based on MAC addresses, like so:

        subnet 10.32.2.0 netmask 255.255.254.0 {
          option subnet-mask 255.255.254.0;
          option routers 10.32.2.1;
          option domain-name-servers 10.32.2.1;
          option domain-name "util.of1.local.net of1.local.net srv.local.net";
          site-option-space "pxelinux";
          option pxelinux.magic f1:00:74:7e;
          if exists dhcp-parameter-request-list {
            option dhcp-parameter-request-list = concat(option dhcp-parameter-request-list,d0,d1,d2,d3);
          }
          group {
            option pxelinux.configfile "pxelinux.cfg/pxeboot";
            host my-new-server {
              fixed-address my-new-server.srv.local.net;
              hardware ethernet aa:aa:aa:bb:bb:bb;
            }
          }
        }

    So the hostname should be my-new-server.srv.local.net; however, when building an Ubuntu 12.04 node, the hostname ends up as my-new-server.util.of1.local.net. When building Lucid (10.04) hosts the hostname is correct; it's only on Precise/12.04 nodes that we have the problem. Doing a normal and reverse lookup on the host and IP returns the correct result:

        Sams-MacBook-Pro:~ sam$ host my-new-server
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host my-new-server.srv.local.net
        my-new-server.srv.local.net has address 10.32.2.30
        Sams-MacBook-Pro:~ sam$ host 10.32.2.30
        30.2.32.10.in-addr.arpa domain name pointer my-new-server.srv.local.net.

    The contents of the hosts file are incorrect too:

        127.0.0.1 localhost
        127.0.1.1 my-new-server.util.of1.local.net of1.local.net srv.local.net my-new-server

    So it looks like when it creates the hosts file, it puts the entire contents of the DNS search path into the local address line, so the FQDN according to the server is the short hostname as defined, followed by the first domain in the search path. It's picking up the first part of the hostname, then the rest is wrong. Is there a way to get around this behaviour, or fix it so the hostname is set correctly?
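
    One thing worth checking (a sketch, not tested against this exact setup): the domain-name option above is carrying a whole search list, which newer Ubuntu installers copy verbatim into /etc/hosts. DHCP has a separate option for search lists, so splitting them may avoid the behaviour; the domain names below are the ones from the question:

        option domain-name "srv.local.net";
        option domain-search "util.of1.local.net", "of1.local.net", "srv.local.net";

    With that, clients should still search all three domains, while anything that derives the FQDN from domain-name gets a single, correct domain.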

    Read the article

  • Amazon EC2 Ubuntu with GitLab and Nginx - can't load?

    - by thebluefox
    OK, so I've spooled up an Amazon EC2 server running Ubuntu, and then followed the instructions below to install GitLab: http://doc.gitlab.com/ce/install/installation.html

    The only step I've not been able to complete is running the following check on the status:

        sudo -u git -H bundle exec rake gitlab:check RAILS_ENV=production

    I get the following error:

        rake aborted!
        Errno::ENOMEM: Cannot allocate memory - whoami

    which I presume is because my EC2 is just running a free tier setup, so isn't that well spec'd.

    Regardless, I've been trying to access this through my browser. I've set up the elastic IP and pointed my domain at it (for the purpose of this, let's say it's git.mydom.co.uk). Doing a whois on this domain shows me it's pointing to the right place. For some reason, though, I get "Oops, Chrome could not connect to git.mydom.co.uk". Now, for a period of time I was getting the Nginx holding page (telling me I still needed to perform configuration). This disappeared after I removed the default file from /etc/nginx/sites-enabled/ (after reading on a troubleshooting page that it could be the issue). Since then I've had nothing, even when I symlinked the file back in from /sites-available. I've tried changing the owner of the git.mydom.co.uk file inside /sites-enabled and /sites-available to www-data, as suggested here, but I could only change the permission of the file in /sites-available, and not the symlinked one in /sites-enabled. The content of this file is as follows:

        upstream gitlab {
          server unix:/home/git/gitlab/tmp/sockets/gitlab.socket;
        }

        server {
          listen *:80 default_server;  # e.g., listen 192.168.1.1:80; In most cases *:80 is a good idea
          server_name git.mydom.co.uk;  # e.g., server_name source.example.com;
          server_tokens off;  # don't show the version number, a security best practice
          root /home/git/gitlab/public;

          # Increase this if you want to upload large attachments
          # Or if you want to accept large git objects over http
          client_max_body_size 20m;

          # individual nginx logs for this gitlab vhost
          access_log /var/log/nginx/gitlab_access.log;
          error_log /var/log/nginx/gitlab_error.log;

          location / {
            # serve static files from defined root folder;.
            # @gitlab is a named location for the upstream fallback, see below
            try_files $uri $uri/index.html $uri.html @gitlab;
          }

    All the paths mentioned in here look OK... I'm about at the end of my knowledge now!
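
    On the ENOMEM error: a common workaround on free-tier instances (which ship with no swap) is to add a swap file so fork-heavy rake tasks can allocate. A minimal sketch, sized at 1 GB:

        sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
        sudo chmod 600 /swapfile
        sudo mkswap /swapfile
        sudo swapon /swapfile
        # persist across reboots:
        echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab

    This doesn't address the connection problem directly, but a failing gitlab:check can leave services half-configured, so it's worth clearing first.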

    Read the article

  • Virtualbox - routing subnet to bridge adapters

    - by user42384
    Hello, I have set up a Debian Lenny box with 3 vbox Lenny machines running on eth0 of the host in bridged mode (on VirtualBox 3.1.6). When testing in my local LAN this all worked perfectly well, and traffic flowed to and from the IPs of the virtual machines as it should. However, now that it's in its co-lo home, the networking setup is a bit different, and I'm unable to get traffic to flow to the vboxes properly. Specifically, the host has its own primary IP, and I have a separate subnet of 8 (6 usable) IPs routed to the box for use by the vboxes.

    So, eth0 on the host is:

        Machine IP: 2x.x.x.137
        Gateway IP: 2x.x.x.138
        Subnet Msk: 255.255.255.252

    The subnet for the vboxes is:

        Subnet:  2x.x.x.240/29
        Netmask: 255.255.255.248

    vbox1 is configured to 2x.x.x.241 on eth0 as follows:

        auto eth0
        iface eth0 inet static
            address 2x.x.x.241
            netmask 255.255.255.248

    Setting up a virtual interface (eth0:0) on the host with one of these subnet IPs allows me to ping that address from vbox1, and it allows me to ping vbox1 from the host. I can also ping that virtual interface perfectly well from outside, so the IPs are definitely landing at my machine. It seems I'm missing some sort of routing instruction, either on the host or on vbox1, to get traffic moving between the subnet and the default gateway, but I can't figure out what it should be, or what glaringly obvious thing I'm missing. Most of my obvious attempts (the gw of eth0, the IP of eth0) were rejected by the route command with SIOCADDRT: No such device (i.e. it can't find it). I tried setting vbox1 to bridge on eth0:0, but this was not an acceptable device name and VBoxHeadless refused to start. The physical machine does have an unused physical NIC at eth1 that can be used if necessary. The host machine is running iptables configured by ferm; I have experimented with allowing forwarding for that subnet, but I wouldn't have thought this was necessary given the nature of the virtualbox devices (nor did it actually work). Clearing out all of these rules for a blank iptables set does not resolve the issue. (You can see the ferm-generated iptables at http://codedumper.com/ojaze) Thanks for any help you can give... Patrick
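
    Given that the /29 is routed to the host's primary IP (rather than switched onto the wire), one standard sketch - untested here - is to let the host forward it instead of using an eth0:0 alias. The SIOCADDRT errors are consistent with trying to add a gateway that isn't on-link yet:

        # on the host: accept and forward the routed subnet onto eth0
        echo 1 > /proc/sys/net/ipv4/ip_forward
        ip route add 2x.x.x.240/29 dev eth0

        # on vbox1: make the host's primary IP reachable on-link, then use it as gateway
        ip route add 2x.x.x.137/32 dev eth0
        ip route add default via 2x.x.x.137

    Whether this or plain bridging is correct depends on how the co-lo actually delivers the /29 (routed to .137 versus switched onto the segment), which is worth confirming with them first.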

    Read the article

  • Ubuntu 11.10 firewall/gateway - no client internet access

    - by Siriss
    I have read many other posts but cannot figure this out. eth0 is my external interface, connected to a Comcast modem; the server has internet access with no issues. eth1 is internal and running DHCP for the clients. DHCP is working just fine: all my clients can get an IP and ping the server, but they cannot access the internet. I am using ISC-DHCP-SERVER and have set /etc/default/isc-dhcp-server to:

        INTERFACE="eht1"

    Here is my dhcpd.conf file, located in /etc/dhcp/dhcpd.conf:

        ddns-update-style interim;
        ignore client-updates;

        subnet 10.0.10.0 netmask 255.255.255.0 {
            range 10.0.10.10 10.0.10.200;
            option routers 10.0.10.2;
            option subnet-mask 255.255.255.0;
            option domain-name-servers 208.67.222.222, 208.67.220.220; #OpenDNS
            # option domain-name "example.com";
            default-lease-time 21600;
            max-lease-time 43200;
            authoritative;
        }

    I have made the net.ipv4.ip_forward=1 change in /etc/sysctl.conf. Here is my interfaces file:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

        iface eth1 inet static
            address 10.0.10.2
            netmask 255.255.255.0
            network 10.0.10.0
        auto eth1

    And finally, here is my iptables.conf file:

        # Firewall configuration written by system-config-firewall
        # Manual customization of this file is not recommended.
        *nat
        :PREROUTING ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        -A POSTROUTING -s 10.0.10.0/24 -o eth0 -j MASQUERADE
        #-A PREROUTING -i eth0 -p tcp --dport 59668 -j DNAT --to-destination 10.0.10.2:59668
        COMMIT
        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -i eth1 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 443 -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 53 -j ACCEPT
        -A INPUT -m state --state NEW -m udp -p udp --dport 53 -j ACCEPT
        -A FORWARD -s 10.0.10.0/24 -o eth0 -j ACCEPT
        -A FORWARD -d 10.0.10.0/24 -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT
        -A FORWARD -p icmp -j ACCEPT
        -A FORWARD -i lo -j ACCEPT
        -A FORWARD -i eth1 -j ACCEPT
        #-A FORWARD -i eth0 -m state --state NEW -m tcp -p tcp -d 10.0.10.2 --dport 59668 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    I am completely stuck. I cannot figure out why the clients cannot access the internet. Am I missing a service? Is a service not running? Any help would be greatly appreciated. I tried to be as thorough as possible, but please let me know if I have missed something. Thank you!
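
    A few quick checks for this kind of gateway setup (generic diagnostics, not specific to this config - though note the interface name in /etc/default/isc-dhcp-server above is spelled "eht1", which is worth double-checking):

        # is forwarding actually active right now? (sysctl.conf only applies at boot)
        sysctl net.ipv4.ip_forward        # should print 1; if not:
        sudo sysctl -w net.ipv4.ip_forward=1

        # is the MASQUERADE rule loaded and matching packets?
        sudo iptables -t nat -L POSTROUTING -v -n

        # from a client, test routing and DNS separately:
        ping 208.67.222.222               # works => routing/NAT is fine and DNS is the issue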

    Read the article

  • Looking for personal scheduling software / todo list with rather particular requirements

    - by Cthulhu
    I've been scouring the web for a couple of (my boss') hours, looking for a piece of software that can organize my tasks in two ways. First, I have a list of bullet points / todo items I can do at any given time. Think of stuff like: solve issue X, ask X about Y, write documentation about Z, etcetera. Second, I have a number of running projects I'd like to organize better, as in schedule for a certain part of a day of the week. Ideally (I think), my day would be organized as 50% spent on projects and 50% on the other small things.

    Now, I don't like most calendar applications (such as Outlook & friends); their UI is too 'official', and it's not really easy to move stuff around (in my experience). I don't like most todo lists either - too static and things. I like new, fast and hip software.

    I've looked at GTD versions of TiddlyWiki, and I like mGSD for one particular feature. You can make lists of tasks and basically give them one of three statuses - Now (nothing required, you can do it right away), Waiting (you need someone or something before you can work on this), or the most gratifying of all, Done. I like that feature because it's a simple todo list, but it indicates more accurately the things you can do right now and the things you depend on someone else for. Anyway, that's just a small aspect of that program - most of the other things in there I can't find a particularly good use for.

    If there's something like that (maybe something that works even snappier, with a cleaner UI), combined with an easy-to-use bit of scheduling software (optionally separated into two applications, but preferably not), I think I'd like that. (Besides something like that, I also use several instances of Trac to monitor tasks and bugs for the various clients and projects I have to serve, and TaskCoach to monitor the amount of time I spend on each task / each client. An easy, low-maintenance time-tracking software would be neat too.)

    Of course, the software has to be free to use. I don't like shareware, trials, or limited software and the like. I could develop my own too, but I'm lazy like that and there are a dozen other projects I'd like to do in my free time (none of which I actually do).

    Edit: I like David Seah's Printable CEO stuff; if something like that (with some video game / instant achievement / gratification angle) exists in software, it'd be awesome.

    Read the article

  • route http and ssh traffic normally, everything else via vpn tunnel

    - by Normadize
    I've read quite a bit and I feel I'm close, but I'm pulling my hair out... please help! I have an OpenVPN client whose server sets local routes and also changes the default gw (I know I can prevent that with --route-nopull). I'd like all outgoing http and ssh traffic to go via the local gw, and everything else via the VPN.

    The local IP is 192.168.1.6/24, gw 192.168.1.1. The OpenVPN local IP is 10.102.1.6/32, gw 192.168.1.5. The OpenVPN server is at {OPENVPN_SERVER_IP}. Here's the route table after the OpenVPN connection:

        # ip route show table main
        0.0.0.0/1 via 10.102.1.5 dev tun0
        default via 192.168.1.1 dev eth0 proto static
        10.102.1.1 via 10.102.1.5 dev tun0
        10.102.1.5 dev tun0 proto kernel scope link src 10.102.1.6
        {OPENVPN_SERVER_IP} via 192.168.1.1 dev eth0
        128.0.0.0/1 via 10.102.1.5 dev tun0
        169.254.0.0/16 dev eth0 scope link metric 1000
        192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.6 metric 1

    This makes all packets go via the VPN tunnel except those destined for 192.168.1.0/24. Doing wget -qO- http://echoip.org shows the VPN server's address, as expected; the packets have 10.102.1.6 as source address (the VPN local IP) and are routed via tun0, as reported by tcpdump -i tun0 (tcpdump -i eth0 sees none of this traffic).

    What I tried was:

    - create a 2nd routing table holding the 192.168.1.6/24 routing info (copied from the main table above)
    - add an iptables -t mangle -I PREROUTING rule to mark packets destined for port 80
    - add an ip rule to match on the mangled packet and point it to the 2nd routing table
    - add ip rules for "to 192.168.1.6" and "from 192.168.1.6" pointing to the 2nd routing table (though this is superfluous)
    - disable the ipv4 reverse-path filter with net.ipv4.conf.tun0.rp_filter=0 and net.ipv4.conf.eth0.rp_filter=0

    I also tried an iptables mangle OUTPUT rule and an iptables nat PREROUTING rule. It still fails and I'm not sure what I'm missing:

    - iptables mangle prerouting: packet still goes via the VPN
    - iptables mangle output: packet times out

    Is it not the case that, to achieve what I want, when doing wget http://echoip.org I should change the packet's source address to 192.168.1.6 before routing it off? But if I do that, wouldn't the response from the http server be routed back to 192.168.1.6, and wget not see it, as it is still bound to tun0 (the VPN interface)? Can a kind soul please help? What commands would you execute after the OpenVPN connects to achieve what I want? Looking forward to hair regrowth...
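
    A sketch of the usual recipe for this (fwmark-based policy routing for locally generated traffic; addresses taken from the question, table number arbitrary). The two key details are marking in the mangle OUTPUT chain (locally generated packets never traverse PREROUTING) and SNAT-ing the marked packets so replies come back via eth0:

        # mark outgoing http and ssh
        iptables -t mangle -A OUTPUT -p tcp -m multiport --dports 80,22 -j MARK --set-mark 1

        # rewrite the source so replies return via the LAN, not the tunnel
        iptables -t nat -A POSTROUTING -o eth0 -m mark --mark 1 -j SNAT --to-source 192.168.1.6

        # marked packets use a table whose default route is the local gateway
        ip rule add fwmark 1 table 100
        ip route add default via 192.168.1.1 dev eth0 table 100
        ip route add 192.168.1.0/24 dev eth0 table 100
        ip route flush cache

        # keep rp_filter off on both interfaces, as already done above

    The timeout seen with the mangle OUTPUT attempt is consistent with the missing SNAT: the packet leaves eth0 carrying the 10.102.1.6 source, so the reply gets routed into the tunnel.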

    Read the article

  • Htaccess strange behaviour with Nginx

    - by Termos
    I have a site running on Nginx (v1.0.14) serving as a reverse proxy which proxies requests to Apache (v2.2.19). Nginx runs on port 80, Apache is on 8080. Overall the site works fine, except that I cannot block access to certain directories with an .htaccess file. For example, I have 'my-protected-directory' on 'www.site.com'. Inside it I have .htaccess with the following code:

        <Files *>
        order deny,allow
        deny from all
        allow from 1.2.3.4   <--- my ip address here
        </Files>

    When I try to access this page with my IP (1.2.3.4), I get a 404 error, which is not what I expect:

        http://www.site.com/my-protected-directory

    However, everything works as expected when this page is served directly by Apache. I can see this page, everyone else can't:

        http://www.site.com:8080/my-protected-directory

    Update. Nginx config (7.1.3.7 is the site IP):

        user apache;
        worker_processes 4;
        error_log logs/error.log;
        pid logs/nginx.pid;

        events {
            worker_connections 1024;
        }

        http {
            include mime.types;
            default_type application/octet-stream;
            log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                            '$status $body_bytes_sent "$http_referer" '
                            '"$http_user_agent" "$http_x_forwarded_for"';
            sendfile on;
            keepalive_timeout 65;
            gzip on;
            gzip_min_length 1024;
            gzip_http_version 1.1;
            gzip_proxied any;
            gzip_comp_level 5;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon;

            server {
                listen 80;
                server_name www.site.com site.com 7.1.3.7;
                access_log logs/host.access.log main;

                # serve static files
                location ~* ^.+.(jpg|jpeg|gif|png|ico|css|zip|tgz|gz|rar|bz2|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|js)$ {
                    root /var/www/vhosts/www.site.com/httpdocs;
                    proxy_set_header Range "";
                    expires 30d;
                }

                # pass requests for dynamic content to Apache
                location / {
                    proxy_redirect off;
                    proxy_set_header X-Real-IP $remote_addr;
                    proxy_set_header Host $http_host;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header Range "";
                    proxy_pass http://7.1.3.7:8080;
                }
            }

    Could anyone please tell me what is wrong and how this can be fixed?
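
    One hedged observation: the static-files regex location above matches by extension anywhere in the URI, so a request like /my-protected-directory/index.html is served by nginx straight from httpdocs - Apache and its .htaccess never see it, and a missing file becomes a 404. If the goal is IP restriction at the edge, a sketch of doing it in nginx itself (using the IP from the question):

        location /my-protected-directory {
            allow 1.2.3.4;
            deny all;
            proxy_pass http://7.1.3.7:8080;
        }

    Also bear in mind that with a proxy in front, Apache sees nginx's address as the client, so Allow/Deny rules in .htaccess would need something like mod_rpaf to evaluate X-Forwarded-For anyway.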

    Read the article

  • How did what appears to be a virus get on my computer? (explanation of situation enclosed)

    - by Massimo
    My system is Windows XP SP3, updated with the latest patches. The PC is connected to a Cisco 877 ADSL router, which does NAT from the internal network to its single static public IP address. There are no forwarded ports, and the router's management console can only be accessed from the inside.

    I was doing two things: working on a remote office machine via VPN and browsing some web pages on the Cisco web site. The remote network is absolutely safe (it's a lab network: four virtual servers, no publicly accessible services and no users at all; also, none of what I'm going to describe ever happened there). The Cisco web site... well, I suppose is quite safe, too.

    Suddenly, something happened. Strange popups appeared everywhere; programs claiming to be "antimalware", "antispyware" and so on began auto-installing; fake Windows Update and Security Center icons popped up in the system tray. svchost.exe began crashing repeatedly. Then, finally, after some minutes of this... BSOD. And, upon rebooting, BSOD again. Even in safe mode.

    OK, that was obviously some virus/trojan/whatever. I had to install a new copy of Windows on another partition to clean things up. I found strange executables, services and DLLs almost everywhere. Amongst other things, user32.dll and ndis.sys had been replaced. A fake piece of software called "Antimalware Doctor" had been installed. There were services with completely random names or even GUIDs (!), and also ones called "IpSect" and "Darkness". There were executable files without an .exe extension. There were even two boot-class drivers, which I'm quite sure are the ones that finally caused the system to crash. A true massacre.

    OK, now the questions: What the hell was that?!? It was something more than a simple virus! How did it manage to attack my computer, given that I am behind a firewall and was not doing anything even potentially harmful on the web at the time?

    Read the article

  • Echo 404 directly from nginx to improve performance

    - by user64204
    I am in charge of production servers serving static content for a website. Those servers are constantly being crawled by bots looking for potential exploits (which isn't that much of a problem security-wise, because no application can be reached behind the web server), but this generates thousands of 404s per day, sometimes per hour. I am looking into ways of blocking those requests, but it's tricky (you want to make sure you don't block legitimate traffic, and these bots are becoming more and more clever at looking like they're legit), and it is going to take me a while to find an acceptable solution.

    In the meantime, I would like to reduce the performance impact of serving those 404 pages. Indeed, we're using nginx, which by default is configured to serve its 404 page from disk (this can be changed using the error_page directive, but in the end the 404 will either have to be served from disk or from another external source (e.g. an upstream application, which would be worse)), which isn't ideal.

    I ran a test with ab on my local machine with a basic configuration: in one case I echo a message directly from nginx, so the disk isn't touched at all; in the other case I hit a missing page and nginx serves its 404 from disk.

        server {
            # [...] the default nginx stuff
            location / {
            }
            location /this_page_exists {
                echo "this page was found";
            }
        }

    Here are the test results (my laptop has an Intel(R) Core(TM) i7-2670QM + SSD, in case you're wondering why they are so high):

        $ ab -n 500000 -c 1000 http://localhost/this_page_exists
        Requests per second: 25609.16 [#/sec] (mean)
        $ ab -n 500000 -c 1000 http://localhost/this_page_doesnt_exists
        Requests per second: 22905.72 [#/sec] (mean)

    As you can see, returning a value with echo is roughly 12% ((25609-22905)÷22905×100) faster than serving the 404 page from disk. Accordingly, I would like to echo a simple "404 - Page not Found" string from nginx. I have tried many things so far but they all failed; essentially the idea was this:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            echo "404 - Page not found";
        }

    The problem is that as soon as the echo directive is used, the http response code is set to 200. I tried changing that by doing error_page 200 = 400, but that breaks the configuration. How can I serve a 404 page directly from nginx? (Without hacking the source, which may be my next step.)
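
    For what it's worth, stock nginx (no echo module needed) can emit both the status code and a short in-memory body with the return directive; a minimal sketch:

        location / {
            try_files $uri @not_found;
        }
        location @not_found {
            default_type text/plain;
            return 404 "404 - Page not found";
        }

    return with a response body requires nginx 0.8.42 or newer, which a 1.0.x build satisfies; the response is generated entirely in memory, so the disk is never touched.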

    Read the article

  • Wifi randomly drops on Windows 8 laptop

    - by JosiahS
    First of all, I did a lot of research on this problem, and I wasn't able to come to any helpful conclusion. I've finally decided that I need advice from those who might know where to look. So don't let me down. :P

    I used to have an older Windows 7 laptop, which worked great for basic office work and web browsing. However, I wanted something that would play actual modern games, so I recently bought a Sager NP8235 with the Intel Wireless-AC 7260 wifi card and installed Windows 8 Pro on it. And ever since, I've been having problems with the wifi.

    Generally, what happens is that if I leave the laptop on but inactive for an extended amount of time (I've estimated around one to two hours), the wifi will start dropping randomly. If I happened to have a download going at the time, it usually causes the download to fail. Or, if I put the laptop to sleep overnight, the next morning I usually have to restart the computer because the wifi device apparently stops working (it literally won't turn on). Also, and most frustrating, whenever I'm on a video chat (like Skype), after about ten minutes the connection will start lagging like crazy, until it forces Skype to end the call. After that, I usually have to disable and re-enable the wifi to get it working again.

    I know it isn't our internet, because all the other computers in our house (~8) don't have any issues. Even the old Windows 7 laptop (also connected over wifi) works just fine, scoring the normal ~3Mbps average on speedtest.net (yes, I know our internet is slow; we live out in the country). Additionally, when I connect the Sager directly to the router via ethernet, the internet instantly works just fine.

    Like I said, I've done a lot of googling to figure out what's going on, and I haven't been able to find anything that worked for me. Is it Windows 8 conflicting with the wifi drivers? As of this writing, I have the Intel drivers v16.1.5.2 installed (without the extra Intel software). Or is it our router? It's a TP-Link TL-WR841ND, set to the default settings. The Sager is currently being assigned a static IP, if that makes any difference. And yet, the old Windows 7 laptop has a much more stable connection than the Sager. Anyone have any ideas? At this point, I'd appreciate even knowing what the problem is.

    Read the article

  • Gentoo box can't cURL or ping after restarting net.eth1

    - by Curlybraces
    Hi all, the following is completely baffling me. We currently have a Gentoo box which acts as our LAMP, DNS and DHCP server. This is assigned a static IP on the network. The server is connected directly to the internet via a BT Business Hub router. The server is also connected to a patch panel/switch port which connects the remaining office (around 10 PCs) to the server.

    Everything was plain sailing until the other day, when the server was restarted. For some reason, only portions of network accessibility are now available, depending on which ethernet device was last restarted. Restarting net.eth0 allows the office server to cURL, ping, etc., but stops all networked PCs from accessing the internet. Then restarting net.eth1 restores internet to the network but stops the server from cURLing, pinging, etc. again. However, even when the server can't ping or cURL, I can still remote SSH and remote MySQL connect from the server command line to other external servers that we own.

    Here's my route map (the router is 192.168.1.254):

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth1
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth1
        127.0.0.0       0.0.0.0         255.0.0.0       U     0      0        0 lo
        0.0.0.0         192.168.1.254   0.0.0.0         UG    0      0        0 eth1

    Here's my /etc/conf.d/net:

        iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0"
        iface_eth1="dhcp"

    None of the above has ever been changed, however. Things have just ceased to operate correctly, which makes me think it's a freshly added iptables rule. Here's the iptables filter table:

        Chain INPUT (policy ACCEPT)
        target  prot opt source           destination
        DROP    tcp  --  ##.##.##.##      anywhere        tcp dpt:ssh
        ACCEPT  all  --  anywhere         anywhere        state RELATED,ESTABLISHED
        ACCEPT  all  --  anywhere         anywhere
        ACCEPT  tcp  --  anywhere         anywhere        tcp dpt:2199
        ACCEPT  tcp  --  anywhere         anywhere        tcp dpt:3199
        ACCEPT  tcp  --  ##.###.###.##    anywhere        tcp dpt:http
        ACCEPT  tcp  --  ###.###.##.##    anywhere        tcp dpt:2199
        ACCEPT  tcp  --  ##.###.###.###   anywhere        tcp dpt:http
        ACCEPT  tcp  --  ##.###.##.##     anywhere        tcp dpt:http
        ACCEPT  tcp  --  ##.###.###.###   anywhere        tcp dpt:3128
        ACCEPT  udp  --  ##.###.###.###   anywhere        udp dpt:3128
        ACCEPT  tcp  --  ##.###.###.###   anywhere        tcp dpt:http
        ACCEPT  tcp  --  ##.###.###.###   anywhere        tcp dpt:https

        Chain FORWARD (policy ACCEPT)
        target  prot opt source           destination
        ACCEPT  all  --  anywhere         ##.###.###.##
        DROP    all  --  anywhere         ##.###.###.##
        ACCEPT  all  --  anywhere         anywhere        state NEW,ESTABLISHED

        Chain OUTPUT (policy ACCEPT)
        target  prot opt source           destination
        ACCEPT  udp  --  anywhere         anywhere        udp spt:2199
        ACCEPT  udp  --  anywhere         anywhere        udp spt:4817
        ACCEPT  udp  --  anywhere         anywhere        udp spt:4819
        ACCEPT  udp  --  anywhere         anywhere        udp spt:3199

    Help gratefully appreciated.
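
    Worth noting from the route table above: both eth0 and eth1 carry a route for 192.168.1.0/24, so whichever interface came up last "wins" the subnet and the default route, which matches the flip-flop behaviour described. A sketch of one way out (assuming both NICs really are on the same LAN segment and only one needs an IP):

        # /etc/conf.d/net - give the box a single routed interface
        iface_eth0="192.168.1.99 broadcast 192.168.1.255 netmask 255.255.255.0"
        gateway="eth0/192.168.1.254"
        # leave eth1 unconfigured (or bridge it) instead of iface_eth1="dhcp"

    The gateway syntax here is the old Gentoo baselayout-1 style, matching the iface_* variables shown; on baselayout-2 the equivalent would be routes_eth0="default via 192.168.1.254".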

    Read the article

  • nginx rewrite rule to convert URL segments to query string parameters

    - by Nick
    I'm setting up an nginx server for the first time, and I'm having some trouble getting the rewrite rules right. The Apache rules we used were:

        # See if it's a real file or directory; if so, serve it,
        # then send all requests for / to Director.php
        DirectoryIndex Director.php

        # If the URL has one segment, pass it as rt
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 [L,QSA]

        # If the URL has two segments, pass it as rt and action
        RewriteRule ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 [L,QSA]

    My nginx config file looks like:

        server {
            ...
            location / {
                try_files $uri $uri/ /index.php;
            }
            location ~ \.php$ {
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
            }
        }

    How do I get the URL segments into query string parameters like in the Apache rules above?

    UPDATE 1: Trying Pothi's approach:

        # serve static files directly
        location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html)$ {
            access_log off;
            expires 30d;
        }
        location / {
            try_files $uri $uri/ /Director.php;
            rewrite "^/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1" last;
            rewrite "^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$" "/Director.php?rt=$1&action=$2" last;
        }
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

    This produces the output "No input file specified." on every request. I'm not clear on whether the .php location gets triggered (and the request subsequently passed to PHP) when a rewrite in any block points at a .php file.

    UPDATE 2: I'm still confused about how to set up these location blocks and pass the parameters.

        location /([a-zA-Z0-9\-\_]+)/ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME ${document_root}Director.php?rt=$1{$args};
            include fastcgi_params;
        }

    UPDATE 3: It looks like the root directive was missing, which caused the "No input file specified." message. Now that this is fixed, I get the index file as if the URL were / on every request, regardless of the number of URL segments. It appears that my location regular expression is being ignored. My current config is:

        # This location is ignored:
        location /([a-zA-Z0-9\-\_]+)/ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index Director.php;
            set $args $query_string&rt=$1;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
        location / {
            try_files $uri $uri/ /Director.php;
        }
        location ~ \.php$ {
            fastcgi_pass unix:/var/run/php5-fpm.sock;
            fastcgi_index Director.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
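
    For reference, a sketch of how those two Apache rules are usually translated. Regex locations need the ~ modifier (a plain location is a prefix match, which is why the one above is ignored), but the simpler route is to keep the rewrites in a named fallback so real files are still served first:

        location / {
            try_files $uri $uri/ @director;
        }
        location @director {
            rewrite ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1&action=$2 last;
            rewrite ^/([a-zA-Z0-9\-\_]+)/$ /Director.php?rt=$1 last;
            rewrite ^ /Director.php last;
        }

    The rewritten URI re-enters location matching and lands in the existing \.php$ block. The two-segment rule has to come first since rewrites are evaluated in order, and the QSA behaviour comes for free because nginx appends the original query string unless the replacement ends with a ?.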

    Read the article

  • NGINX rewrite rules help: redirect not working, and want to get rid of index.php in URLs

    - by Tamerax
    Hey! I have two questions for nginx users.

    1) I'm trying to set up my Joomla server on my new Linode running nginx, and after much (like days) of searching and testing, I finally have a config that works with SEF URL plugins... sorta. I was using Apache on the old server, with mod_rewrite, and life was fine in terms of SEF. Since nginx doesn't have mod_rewrite, I found something that works, BUT it constantly leaves index.php in the URLs, e.g.:

        http://mysite.com/index.php/forum

    I want it to be just http://mysite.com/forum, but without mod_rewrite that doesn't seem to be possible in Joomla, as far as I'm aware. I know in WordPress it IS possible, but I have to use a plugin. Here is my config file:

        server {
            listen 80;
            server_name mysite.com www.mysite.com;
            access_log /home/public_html/mysite.com/log/access.log;
            error_log /home/public_html/mysite.com/log/error.log;
            root /home/public_html/mysite.com/public/;
            large_client_header_buffers 4 8k; # prevent some 400 errors
            index index.php index.html;
            fastcgi_index index.php;

            location / {
                expires 30d;
                error_page 404 = @joomla;
                log_not_found off;
            }

            # Rewrite
            location @joomla {
                rewrite ^(.*)$ /index.php?q=last;
            }

            # Static Files
            location ~* ^.+.(jpg|jpeg|gif|css|png|js|ico)$ {
                access_log off;
                expires 30d;
            }

            # PHP
            location ~ \.php {
                keepalive_timeout 0;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /usr/local/nginx/conf/fastcgi_params;
                fastcgi_param SCRIPT_FILENAME /home/public_html/mysite.com/public/$fastcgi_script_name;
            }
        }

    2) The second question should be easy, but I can't get it to work. I want to use the same config posted above and have either mysite.com or www.mysite.com forward to mysite.com/portal. Basically, when you hit the front page, with or without the www, it all gets forwarded to a subdirectory on the server called portal. I have tried several variations of:

        rewrite ^/(.*) http://www.example.com/portal/$1 permanent;

    but it usually ends with Firefox telling me there is some crazy loop happening, the address bar saying something like mysite.com/portalportalportalportal... on and on. So, any help on either of these issues would be awesome! Thanks!!
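
    A couple of hedged sketches. For (1), Joomla's SEF URLs on nginx are usually handled with try_files rather than an error_page trampoline (this mirrors the ?q= parameter used above, and requires "Use URL rewriting" enabled in Joomla):

        location / {
            try_files $uri $uri/ /index.php?q=$uri&$args;
        }

    For (2), the loop happens because the rewrite shown matches every request, including /portal itself, so each redirect gets rewritten again. Anchoring the redirect to the bare front page only avoids that:

        location = / {
            return 301 http://mysite.com/portal/;
        }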

    Read the article

  • .htaccess ignored, SPECIFIC to EC2 - not the usual suspects

    - by tedneigerux
    I run 8-10 EC2-based web servers, so I have many hours of experience, but it is limited to CentOS - specifically Amazon's distribution. I'm installing Apache using yum, and therefore getting Amazon's default compilation of Apache. I want to implement canonical redirects from the non-www (bare/root) domain to www.domain.com for SEO using mod_rewrite, BUT MY .htaccess FILE IS CONSISTENTLY IGNORED. My troubleshooting steps (outlined below) led me to believe it's something specific to Amazon's build of Apache.

    TEST CASE

    Launch an EC2 instance, e.g. Amazon Linux AMI 2013.03.1. SSH to the server and run the commands:

        $ sudo yum install httpd
        $ sudo apachectl start
        $ sudo vi /etc/httpd/conf/httpd.conf
        $ sudo apachectl restart
        $ sudo vi /var/www/html/.htaccess

    In httpd.conf I changed the following, in the DOCROOT section/scope:

        AllowOverride All

    In .htaccess, I added (EDIT: I added RewriteEngine On later):

        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    Permissions on .htaccess are correct, as far as I can tell:

        $ ls -al /var/www/html/.htaccess
        -rwxrwxr-x 1 git apache 142 Jun 18 22:58 /var/www/html/.htaccess

    Other info:

        $ httpd -v
        Server version: Apache/2.2.24 (Unix)
        Server built: May 20 2013 21:12:45
        $ httpd -M
        Loaded Modules:
          core_module (static)
          ...
          rewrite_module (shared)
          ...
          version_module (shared)
        Syntax OK

    EXPECTED BEHAVIOR

        $ curl -I domain.com
        HTTP/1.1 301 Moved Permanently
        Date: Wed, 19 Jun 2013 12:36:22 GMT
        Server: Apache/2.2.24 (Amazon)
        Location: http://www.domain.com/
        Connection: close
        Content-Type: text/html; charset=UTF-8

    ACTUAL BEHAVIOR

        $ curl -I domain.com
        HTTP/1.1 200 OK
        Date: Wed, 19 Jun 2013 12:34:10 GMT
        Server: Apache/2.2.24 (Amazon)
        Connection: close
        Content-Type: text/html; charset=UTF-8

    TROUBLESHOOTING STEPS

    In .htaccess, I added:

        BLAH BLAH BLAH ERROR
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^/(.*) http://www.domain.com/$1 [R=301,L]

    My server threw an error 500, so I knew the .htaccess file was processed. As expected, it created an error log entry:

        [Wed Jun 19 02:24:19 2013] [alert] [client XXX.XXX.XXX.XXX] /var/www/html/.htaccess: Invalid command 'BLAH BLAH BLAH ERROR', perhaps misspelled or defined by a module not included in the server configuration

    Since I have root access on the server, I then tried moving my rewrite rule directly into the httpd.conf file. THIS WORKED. This tells us several important things are working.

    HOWEVER, it is bothering me that it didn't work in the .htaccess file. And I have other use cases where I need it to work in .htaccess (e.g. an EC2 instance with named virtual hosts). Thank you in advance for your help.
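
    One detail that often explains exactly this symptom (the 500 proves .htaccess is read, yet the rule never fires): in per-directory (.htaccess) context, mod_rewrite strips the leading slash from the URI before matching, so a pattern anchored as ^/ never matches. A sketch of the directory-context version:

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^domain\.com$ [NC]
        RewriteRule ^(.*)$ http://www.domain.com/$1 [R=301,L]

    The same pattern with the leading slash is fine in server/vhost context, which would also explain why moving the rule into httpd.conf succeeded.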

    Read the article

  • CakePHP & nginx config/rewrite rules

    - by seanl
    Hi, somebody please help me out. I've asked this at Stack Overflow as well but not got much of a response, and was debating whether it was programming- or server-related.

    I'm trying to set up a CakePHP environment on a CentOS server running nginx with FastCGI. I already have a WordPress site and a phpMyAdmin site running on the server, so I have PHP configured correctly. My problem is that I cannot get the rewrite rules set up correctly in my vhost so that Cake renders pages correctly, i.e. with styling and so on. I've googled as much as possible, and the main consensus from sites like the one listed below is that I need to have the following rewrite rule in place:

        location / {
            root /var/www/sites/somedomain.com/current;
            index index.php index.html;
            # If the file exists as a static file, serve it directly
            # without running all the other rewrite tests on it
            if (-f $request_filename) {
                break;
            }
            if (!-f $request_filename) {
                rewrite ^/(.+)$ /index.php?url=$1 last;
                break;
            }
        }

    (See http://blog.getintheloop.eu/2008/4/17/nginx-engine-x-rewrite-rules-for-cakephp)

    The problem is that these rewrites assume you run Cake directly out of the webroot, which is not what I want to do. I have a standard setup for each site, i.e. one folder per site containing the following folders: log, backup, private and public. Public is where nginx looks for the files to serve, but I have Cake installed in private, with a symlink in public linking back to /private/cake/. This is my vhost:

        server {
            listen 80;
            server_name app.domain.com;
            access_log /home/public_html/app.domain.com/log/access.log;
            error_log /home/public_html/app.domain.com/log/error.log;

            # configure Cake app to run in a sub-directory
            # Cake install is not in root, but elsewhere and configured
            # in APP/webroot/index.php
            location /home/public_html/app.domain.com/private/cake {
                index index.php;
                if (!-e $request_filename) {
                    rewrite ^/(.+)$ /home/public_html/app.domain.com/private/cake/$1 last;
                    break;
                }
            }

            location /home/public_html/app.domain.com/private/cake/ {
                index index.php;
                if (!-e $request_filename) {
                    rewrite ^/(.+)$ /home/public_html/app.domain.com/public/index.php?url=$1 last;
                    break;
                }
            }

            # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME /home/public_html/app.domain.com/private/cake$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    Now, like I said, I can see the main index.php of Cake and have connected it to my DB, but the page is without styling, so before I proceed any further I would like to configure it correctly. What am I doing wrong? Thanks, seanl
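
    The likely snag: nginx location directives match request URIs, not filesystem paths, so blocks like location /home/public_html/... never match anything a browser actually sends. A hedged sketch of the usual pattern for a Cake app whose webroot is symlinked into public (paths taken from the vhost above):

        server {
            listen 80;
            server_name app.domain.com;
            root /home/public_html/app.domain.com/public;
            index index.php;

            location / {
                # serve real files (css/js/images) first, then fall back to Cake's front controller
                try_files $uri $uri/ /index.php?url=$uri&$args;
            }

            location ~ \.php$ {
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include /etc/nginx/fastcgi_params;
            }
        }

    The missing styling is consistent with this: CSS requests were falling through to the PHP rewrite instead of being served as static files.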

    Read the article

  • How do I configure Reverse Group Membership Maintenance on an openldap server? (memberOf)

    - by emills
    I am currently working on integrating LDAP authentication into a system, and I would like to restrict access based on LDAP group. The only way to do this is via a search filter, and therefore I believe my only option is to use the "memberOf" attribute in my search filter. It is my understanding that "memberOf" is an operational attribute which can be created by the server for me any time a new "member" attribute is created for any "groupOfNames" entry on the server. My main goal is to be able to add a "member" attribute to an existing "groupOfNames" entry and have a matching "memberOf" attribute added to the DN I provide.

    What I have managed to achieve so far: I'm still pretty new to LDAP administration, but based on what I found in the OpenLDAP admin's guide, it looks like Reverse Group Membership Maintenance, aka the "memberof overlay", would achieve exactly the effect I am looking for. My server is currently running a package installation (slapd on Ubuntu) of OpenLDAP 2.4.15, which uses "cn=config"-style runtime configuration. Most of the examples I have found still reference the older "slapd.conf" method of static configuration, and I have tried my best to adapt the configurations to the new directory-based model. I have added the following entries to enable the memberof overlay module.

    Enable the module with olcModuleLoad (cn=config/cn\=module\{0\}.ldif):

        dn: cn=module{0}
        objectClass: olcModuleList
        cn: module{0}
        olcModulePath: /usr/lib/ldap
        olcModuleLoad: {0}back_hdb
        olcModuleLoad: {1}memberof.la
        structuralObjectClass: olcModuleList
        entryUUID: a410ce98-3fdf-102e-82cf-59ccb6b4d60d
        creatorsName: cn=config
        createTimestamp: 20090927183056Z
        entryCSN: 20091009174548.503911Z#000000#000#000000
        modifiersName: cn=admin,cn=config
        modifyTimestamp: 20091009174548Z

    Enable the overlay for the database and allow it to use its default settings (groupOfNames, member, memberOf, etc.) (cn=config/olcDatabase={1}hdb/olcOverlay\=\{0\}memberof):

        dn: olcOverlay={0}memberof
        objectClass: olcMemberOf
        objectClass: olcOverlayConfig
        objectClass: olcConfig
        objectClass: top
        olcOverlay: {0}memberof
        structuralObjectClass: olcMemberOf
        entryUUID: 6d599084-490c-102e-80f6-f1a5d50be388
        creatorsName: cn=admin,cn=config
        createTimestamp: 20091009104412Z
        olcMemberOfRefInt: TRUE
        entryCSN: 20091009173500.139380Z#000000#000#000000
        modifiersName: cn=admin,cn=config
        modifyTimestamp: 20091009173500Z

    My current result: with the above configuration, I am able to add a NEW "groupOfNames" with any number of "member" entries and have all the involved DNs updated with a "memberOf" attribute. This is part of the behavior I would expect. While I believe the following should also have been accomplished with the memberof overlay, I still do not know how to do it, and I would gladly welcome any advice:

    - Add a "member" attribute to an EXISTING "groupOfNames" and have a corresponding "memberOf" attribute created automatically.
    - Remove a "member" attribute and have the corresponding "memberOf" attribute removed automatically.
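
    For testing the modify path, a hedged LDIF sketch (the DNs are hypothetical). One caveat worth knowing: the overlay only fires on writes it actually sees, so groups created before the overlay was enabled have to be re-created (or their membership re-added) before modifications to them behave:

        dn: cn=mygroup,ou=groups,dc=example,dc=com
        changetype: modify
        add: member
        member: uid=someuser,ou=people,dc=example,dc=com

    Applied and verified with:

        ldapmodify -x -D cn=admin,dc=example,dc=com -W -f add-member.ldif
        # memberOf is operational, so it must be requested explicitly:
        ldapsearch -x -b ou=people,dc=example,dc=com "(uid=someuser)" memberOf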

    Read the article

  • How to troubleshoot port forwarding on Windows 7 (64 Bit) with ICS enabled?

    - by LearnCocos2D
    I want to forward some ports (1666 for Perforce, 8081 for Hudson) on my internet gateway machine. This machine is running Windows 7 (64-bit, legal, user account) and is connected to the internet via cable modem (it's not a router). The Windows machine is sharing its internet connection via ICS, and that works fine on all connected computers. I can access the services via the gateway's public IP (95.x.x.x) on the given ports if they are running on the gateway machine itself.

    I've added the ports and destination IP address (192.168.0.18) in the internet network adapter's Advanced Settings dialog (Sharing tab). That's the same dialog where you have a list of preconfigured services like HTTP, FTP and other incoming services. When I do that, I can't connect to the services anymore. For some reason, port forwarding isn't working.

    I have uninstalled BitDefender because I wanted to check whether its firewall interferes. I've also disabled the Windows Firewall and Defender, to no avail. I tried a freeware tool that helps to set up port forwarding, but that didn't work either. The target machine is a Mac OS X computer whose firewall is disabled. Its IP is static. I can successfully connect to the services using the local IP address (192.168.0.18) from two different machines, including the gateway computer. So internally and externally it seems to me that the ports are open and not blocked, and the issue is with port forwarding itself. From what I understand, it should be enough to add an entry to the Advanced Settings dialog to enable port forwarding when there are no firewalls interfering.

    How can I troubleshoot why port forwarding isn't working for me? What steps should I follow to alleviate the issue? PS: I gladly accept command line solutions.

    Other things I've tried:

    - adding an inbound rule to Windows Firewall for the 1666 and 8081 ports
    - trying with Windows Firewall enabled and disabled
    - disabling/enabling the network adapter
    - double-checking that the IP addresses are correct
    - mapping a different incoming port to the service's actual port
    - following or checking the misc tips in this article

    What I haven't dared trying yet (let me know if it's worth a shot):

    - disable/enable ICS
    - remove all network adapters (via Control Panel), then re-install and re-configure them
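
    Since command-line solutions are welcome: as a workaround that bypasses the ICS port-mapping UI entirely, Windows can relay a TCP port in user mode with netsh portproxy (run from an elevated prompt; the addresses below are the ones from the question):

        netsh interface portproxy add v4tov4 listenport=1666 listenaddress=0.0.0.0 connectport=1666 connectaddress=192.168.0.18
        netsh interface portproxy add v4tov4 listenport=8081 listenaddress=0.0.0.0 connectport=8081 connectaddress=192.168.0.18
        netsh interface portproxy show all

    This is a TCP-only relay, not true NAT (the Mac will see connections as coming from the gateway), but it's a quick way to confirm whether the ICS forwarding rules are the broken piece.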

    Read the article

  • No external network on Ubuntu 9.10, though DNS works

    - by user29368
    Hi, I have a weird problem I can't solve. I have several computers, two of them with Xubuntu 9.10. One of them, acting as a media server, has stopped working when it comes to the external network. I can do, for example:

        ping google.com

    which gives me an IP address back, like:

        name@Media:/etc$ ping google.com
        PING google.com (66.102.9.147) 56(84) bytes of data.

    That tells me it reaches the DNS, but I get no response at all. If I ping a local computer, all works fine. I can also reach the computer via SSH without any problems. I have always used Network Manager, but now I have uninstalled it and made the settings manually in /etc/network/interfaces, like this:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet static
            address 192.168.1.52
            netmask 255.255.255.0
            network 192.168.1.0
            broadcast 192.168.1.255
            gateway 192.168.1.1

    Still no luck. I have no specific settings for this machine in my router, and all the other computers, including my Windows laptop, work fine. This is very annoying, since I can't even do an update or anything.

    ifconfig looks like this:

        eth0  Link encap:Ethernet HWaddr 00:24:1d:9f:10:89
              inet addr:192.168.1.52 Bcast:192.168.1.255 Mask:255.255.255.0
              inet6 addr: fe80::224:1dff:fe9f:1089/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:15410 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2693 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1167398 (1.1 MB) TX bytes:694973 (694.9 KB)
              Interrupt:27 Base address:0xe000

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING MTU:16436 Metric:1
              RX packets:2150 errors:0 dropped:0 overruns:0 frame:0
              TX packets:2150 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:143456 (143.4 KB) TX bytes:143456 (143.4 KB)

    route -n like this:

        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     1000   0        0 eth0
        0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 eth0

    I do not know where the route starting with 169.254 comes from. Could that be part of the problem? Hoping for some assistance, since I'm totally stuck here. /george
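
    Two quick diagnostics that narrow this down (generic checks; 66.102.9.147 is just the address the ping above resolved). The 169.254.0.0/16 entry is a link-local route, usually added by avahi-daemon, and is normally harmless, but it's easy to rule out; and pinging by raw IP separates routing from DNS:

        # does routing work at all, independent of DNS?
        ping -c 3 66.102.9.147

        # temporarily drop the link-local route and retest
        sudo route del -net 169.254.0.0 netmask 255.255.0.0 dev eth0

    If pinging the raw IP also gets no replies while DNS lookups succeed (DNS is UDP port 53, which may be handled differently upstream), the next suspect is the gateway/router rather than this interface config.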

    Read the article

  • [CentOS 4.8] nslookup resolves domains to IPs, but I can't get a response to pings to external servers

    - by Beco
    I have a fresh install of CentOS 4.8 running on an internal development server. I haven't done anything to it besides setting up sudoers and SSH. I can SSH into the server, and from there resolve domains to IPs and ping internal servers, but for some reason I don't get any response from pinging external servers. The software firewall is disabled, and the problem is present with both static and DHCP-assigned network configurations. The network domain controller is a Windows Server 2003 box.

        $ nslookup google.com
        Server: 10.254.2.5
        Address: 10.254.2.5#53

        Non-authoritative answer:
        Name: google.com
        Address: 74.125.47.147
        Name: google.com
        Address: 74.125.47.99
        <etc...>

    10.254.2.5 is the Win2K3 server.

        $ ping google.com
        PING google.com (74.125.47.106) 56(84) bytes of data.

    It just hangs here indefinitely.

        $ cat /etc/resolv.conf
        ; generated by /sbin/dhclient-script
        search <...snip...>.local
        nameserver 10.254.2.5
        nameserver 10.254.2.124

    10.254.2.124 is the backup DC server, which is currently off and tombstoned by this point. The snipped section is our company name.

        # ifconfig
        eth0  Link encap:Ethernet HWaddr <snip>
              inet addr:10.254.2.101 Bcast:10.254.2.255 Mask:255.255.255.0
              inet6 addr: <snip>/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
              RX packets:80066 errors:0 dropped:0 overruns:0 frame:0
              TX packets:4421 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:7810133 (7.4 MiB) TX bytes:590550 (576.7 KiB)
              Interrupt:225 Base address:0xc000

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1 Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING MTU:16436 Metric:1
              RX packets:32 errors:0 dropped:0 overruns:0 frame:0
              TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:8104 (7.9 KiB) TX bytes:8104 (7.9 KiB)

        # route -n
        Kernel IP routing table
        Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
        10.254.2.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
        169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
        0.0.0.0         10.254.2.5      0.0.0.0         UG    0      0        0 eth0

    And, for good measure, a snapshot of the current ethernet config via the system-config-network GUI. Edit: I don't yet have enough rep to post images, so here's a link: system-config-network snapshot. Sorry!

    I'm pretty green when it comes to setting up *nix dev servers and network configuration in general, so please let me know if I've left out critical information, or posted information I shouldn't have posted. Thanks!
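
    One thing that stands out in the route table above: the default gateway is 10.254.2.5, the Windows DC itself, which would explain exactly this split (DNS answered locally by the DC, but outbound pings dying unless the DC routes them). A hedged set of checks, using the addresses shown above; the real router's address would need to be substituted:

        # does ICMP by raw IP fail too? (separates routing from DNS)
        ping -c 3 74.125.47.147

        # watch whether echo replies ever come back
        sudo tcpdump -n -i eth0 icmp

        # temporarily point the default route at the actual internet router
        sudo route del default
        sudo route add default gw <router-ip> eth0

    If that fixes it, the permanent fix is correcting GATEWAY= in /etc/sysconfig/network-scripts/ifcfg-eth0 (or the DHCP scope's router option).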

    Read the article

  • TFS 2010 SDK: Connecting to TFS 2010 Programmatically - Part 1

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010,TFS 2010 SDK,TFS API,TFS Programming,TFS ALM   Download Working Demo Great! You have reached that point where you would like to extend TFS 2010. The first step is to connect to TFS programmatically. 1. Download TFS 2010 SDK => http://visualstudiogallery.msdn.microsoft.com/25622469-19d8-4959-8e5c-4025d1c9183d?SRC=VSIDE 2. Alternatively you can also download this from the visual studio extension manager 3. Create a new Windows Forms Application project and add reference to TFS Common and client dlls Note - If Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.Common do not appear on the .NET tab of the References dialog box, use the Browse tab to add the assemblies. You can find them at %ProgramFiles%\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0. using Microsoft.TeamFoundation.Client; using Microsoft.TeamFoundation.Framework.Client; using Microsoft.TeamFoundation.Framework.Common;   4. There are several ways to connect to TFS, the two classes of interest are, Option 1 – Class – TfsTeamProjectCollectionClass namespace Microsoft.TeamFoundation.Client { public class TfsTeamProjectCollection : TfsConnection { public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection); public TfsTeamProjectCollection(Uri uri); public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, IdentityDescriptor identityToImpersonate); public TfsTeamProjectCollection(Uri uri, ICredentials credentials); public TfsTeamProjectCollection(Uri uri, ICredentialsProvider credentialsProvider); public TfsTeamProjectCollection(Uri uri, IdentityDescriptor identityToImpersonate); public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, ICredentials credentials, ICredentialsProvider credentialsProvider); public TfsTeamProjectCollection(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider); public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate); public TfsTeamProjectCollection(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate); public override CatalogNode CatalogNode { get; } public TfsConfigurationServer ConfigurationServer { get; internal set; } public override string Name { get; } public static Uri GetFullyQualifiedUriForName(string name); protected override object GetServiceInstance(Type serviceType, object serviceInstance); protected override object InitializeTeamFoundationObject(string fullName, object instance); } } Option 2 – Class – TfsConfigurationServer namespace Microsoft.TeamFoundation.Client { public class TfsConfigurationServer : TfsConnection { public TfsConfigurationServer(RegisteredConfigurationServer application); public TfsConfigurationServer(Uri uri); public TfsConfigurationServer(RegisteredConfigurationServer application, IdentityDescriptor identityToImpersonate); public TfsConfigurationServer(Uri uri, ICredentials credentials); public TfsConfigurationServer(Uri uri, ICredentialsProvider credentialsProvider); public TfsConfigurationServer(Uri uri, IdentityDescriptor identityToImpersonate); public TfsConfigurationServer(RegisteredConfigurationServer application, ICredentials credentials, ICredentialsProvider credentialsProvider); public TfsConfigurationServer(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider); public 
TfsConfigurationServer(RegisteredConfigurationServer application, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate); public TfsConfigurationServer(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate); public override CatalogNode CatalogNode { get; } public override string Name { get; } protected override object GetServiceInstance(Type serviceType, object serviceInstance); public TfsTeamProjectCollection GetTeamProjectCollection(Guid collectionId); protected override object InitializeTeamFoundationObject(string fullName, object instance); } }   Note – The TeamFoundationServer class is obsolete. Use the TfsTeamProjectCollection or TfsConfigurationServer classes to talk to a 2010 Team Foundation Server. In order to talk to a 2005 or 2008 Team Foundation Server use the TfsTeamProjectCollection class. 5. Sample code for programmatically connecting to TFS 2010 using the TFS 2010 API How do i know what the URI of my TFS server is, Note – You need to be have Team Project Collection view details permission in order to connect, expect to receive an authorization failure message if you do not have sufficient permissions. Case 1: Connect by Uri string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30"; TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri)); Case 2: Connect by Uri, prompt for credentials string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30"; TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri), new UICredentialsProvider()); configurationServer.EnsureAuthenticated(); Case 3: Connect by Uri, custom credentials In order to use this method of connectivity you need to implement the interface ICredentailsProvider public class ConnectByImplementingCredentialsProvider : ICredentialsProvider { public ICredentials GetCredentials(Uri uri, ICredentials iCredentials) { return new NetworkCredential("UserName", "Password", "Domain"); } public void NotifyCredentialsAuthenticated(Uri uri) { throw new ApplicationException("Unable to authenticate"); } } And now consume the implementation of the interface, string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30"; ConnectByImplementingCredentialsProvider connect = new ConnectByImplementingCredentialsProvider(); ICredentials iCred = new NetworkCredential("UserName", "Password", "Domain"); connect.GetCredentials(new Uri(_myUri), iCred); TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri), connect); configurationServer.EnsureAuthenticated();   6. Programmatically query TFS 2010 using the TFS SDK for all Team Project Collections and retrieve all Team Projects and output the display name and description of each team project. 
CatalogNode catalogNode = configurationServer.CatalogNode;

// Query the configuration server's catalog for all project collections.
ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren(
    new Guid[] { CatalogResourceTypes.ProjectCollection },
    false,
    CatalogQueryOptions.None);

// tpc = Team Project Collection
foreach (CatalogNode tpcNode in tpcNodes)
{
    Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]);
    TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId);

    // Get the catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection'
    var tpNodes = tpcNode.QueryChildren(
        new Guid[] { CatalogResourceTypes.TeamProject },
        false,
        CatalogQueryOptions.None);

    foreach (var p in tpNodes)
    {
        Debug.Write(Environment.NewLine + " Team Project : " + p.Resource.DisplayName
            + " - " + p.Resource.Description + Environment.NewLine);
    }
}

The output lists each team project's display name and description in the debug window. You can download a working demo that uses the TFS SDK 2010 to programmatically connect to TFS 2010; screenshots of the demo application are included with the download.
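For completeness, here is a hedged sketch of Option 1: connecting directly to a single team project collection via TfsTeamProjectCollection when you already know the collection URL (the URL below is just the illustrative one used throughout this post):

using System;
using Microsoft.TeamFoundation.Client;

class ConnectToCollection
{
    static void Main()
    {
        // Connect straight to one team project collection rather than
        // going through the configuration server.
        TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(
            new Uri("https://tfs.codeplex.com:443/tfs/tfs30"));

        tpc.EnsureAuthenticated();   // fails here if you lack permission

        Console.WriteLine("Connected to: " + tpc.Name);
    }
}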

    Read the article

  • How do I return integers from a string?

    - by kannan.ambadi
Suppose you are passing a string (e.g. "My name has 1 K, 2 A and 3 N") which may contain integers, letters or special characters, and you want to retrieve only the numbers from the input string. We can implement this in many ways, such as splitting the string into an array or by using the TryParse method. I would like to share another idea: using regular expressions. All you have to do is create an instance of Regex with a pattern that matches the non-integer parts. The Regex class defines a method called Split, which splits the specified input string based on the pattern provided during object initialization.

We can write the code as given below:

public static int[] SplitIdSequenceValues(object combinedArgs)
{
    // \D+ matches one or more non-digit characters, so Split returns the digit runs.
    var _argsSeparator = new Regex(@"\D+", RegexOptions.Compiled);

    string[] splitedIntegers = _argsSeparator.Split(combinedArgs.ToString());

    var args = new int[splitedIntegers.Length];

    // MakeSafe.ToSafeInt32 is the author's helper; it is expected to map
    // non-numeric tokens (such as the empty leading token) to a safe default.
    for (int i = 0; i < splitedIntegers.Length; i++)
        args[i] = MakeSafe.ToSafeInt32(splitedIntegers[i]);

    return args;
}

It is better to set RegexOptions.Compiled: the regular expression is then compiled to MSIL, giving faster matching on repeated use (at the cost of a slightly slower first run).

Happy Programming  :))
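As a quick self-contained check of the idea, here is a sketch that uses LINQ and int.Parse in place of the author's MakeSafe helper, and filters out the empty leading token that Split produces when the input starts with a non-digit:

using System;
using System.Linq;
using System.Text.RegularExpressions;

class RegexSplitDemo
{
    static void Main()
    {
        var nonDigits = new Regex(@"\D+", RegexOptions.Compiled);

        // Split on runs of non-digits; drop the empty token produced when
        // the input begins with a non-digit character.
        int[] numbers = nonDigits.Split("My name has 1 K, 2 A and 3 N")
                                 .Where(token => token.Length > 0)
                                 .Select(int.Parse)
                                 .ToArray();

        Console.WriteLine(string.Join(", ", numbers)); // prints: 1, 2, 3
    }
}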

    Read the article

  • SQL SERVER – 2008 – Introduction to Snapshot Database – Restore From Snapshot

    - by pinaldave
The snapshot database is one of the most interesting concepts I have used in several places recently. Here is a quick definition of the subject from Book On Line: A Database Snapshot is a read-only, static view of a database (the source database). Multiple snapshots can exist on a source database and always reside on the same server instance as the database. Each database snapshot is consistent, in terms of transactions, with the source database as of the moment of the snapshot's creation. A snapshot persists until it is explicitly dropped by the database owner.

If you do not know how snapshot databases work, here is a quick note on the subject; however, please refer to the official description on Book On Line for accuracy. A snapshot database is a read-only database created from an original database called the "source database". It operates at the page level. When a snapshot database is created, it is backed by sparse files; in fact, it occupies no space (or very little space) in the operating system. When a data page is modified in the source database, the original page is first copied to the snapshot database, causing the sparse file to grow. When an unmodified data page is read through the snapshot database, the read is actually served from the original database. In other words, the snapshot preserves a transactionally consistent picture of the source database as of the moment of creation, even as the source database keeps changing.

Let us see a simple example of a snapshot. In the following exercise, we will do a few operations. Please note that this script is for demo purposes only - there are considerations of CPU, disk I/O and memory, which will be discussed in future posts.

Create Snapshot
Delete Data from Original DB
Restore Data from Snapshot

First, let us create the snapshot database and observe the sparse file details.

USE master
GO
-- Create Regular Database
CREATE DATABASE RegularDB
GO
USE RegularDB
GO
-- Populate Regular Database with Sample Table
CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
INSERT INTO FirstTable VALUES(1, 'First');
INSERT INTO FirstTable VALUES(2, 'Second');
INSERT INTO FirstTable VALUES(3, 'Third');
GO
-- Create Snapshot Database
CREATE DATABASE SnapshotDB ON
(Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
AS SNAPSHOT OF RegularDB;
GO
-- Select from Regular and Snapshot Database
SELECT * FROM RegularDB.dbo.FirstTable;
SELECT * FROM SnapshotDB.dbo.FirstTable;
GO

In the resultset, both queries return the three rows we just inserted. Now let us delete something from the original DB and check the same details we checked before.

-- Delete from Regular Database
DELETE FROM RegularDB.dbo.FirstTable;
GO
-- Select from Regular and Snapshot Database
SELECT * FROM RegularDB.dbo.FirstTable;
SELECT * FROM SnapshotDB.dbo.FirstTable;
GO

When we check the details of the sparse file created by the snapshot database, we find something interesting: the details of the regular DB remain the same, while the snapshot's sparse file has grown. This clearly shows that when we delete data from the regular/source DB, the affected data pages are copied to the snapshot database, which is why the size of the snapshot DB increases. Now let us take this small exercise to the next level and restore our deleted data from the snapshot DB to the original source DB.
-- Restore Data from Snapshot Database
USE master
GO
RESTORE DATABASE RegularDB
FROM DATABASE_SNAPSHOT = 'SnapshotDB';
GO
-- Select from Regular and Snapshot Database
SELECT * FROM RegularDB.dbo.FirstTable;
SELECT * FROM SnapshotDB.dbo.FirstTable;
GO
-- Clean up
DROP DATABASE [SnapshotDB];
DROP DATABASE [RegularDB];
GO

Checking the SELECT statements now, we can see that we have successfully restored the database from the snapshot database: both queries return the three original rows again. This is clearly a very useful feature should your business scenario call for it. I would like to request readers to share more details if they are using this feature in their business, and to let me know of any other tasks it could potentially be used to achieve. The complete script of the aforementioned operation is given below for easy reference:

USE master
GO
-- Create Regular Database
CREATE DATABASE RegularDB
GO
USE RegularDB
GO
-- Populate Regular Database with Sample Table
CREATE TABLE FirstTable (ID INT, Value VARCHAR(10))
INSERT INTO FirstTable VALUES(1, 'First');
INSERT INTO FirstTable VALUES(2, 'Second');
INSERT INTO FirstTable VALUES(3, 'Third');
GO
-- Create Snapshot Database
CREATE DATABASE SnapshotDB ON
(Name = 'RegularDB', FileName = 'c:\SSDB.ss1')
AS SNAPSHOT OF RegularDB;
GO
-- Select from Regular and Snapshot Database
SELECT * FROM RegularDB.dbo.FirstTable;
SELECT * FROM SnapshotDB.dbo.FirstTable;
GO
-- Delete from Regular Database
DELETE FROM RegularDB.dbo.FirstTable;
GO
-- Select from Regular and Snapshot Database
SELECT * FROM RegularDB.dbo.FirstTable;
SELECT * FROM SnapshotDB.dbo.FirstTable;
GO
-- Restore Data from Snapshot Database
USE master
GO
RESTORE DATABASE RegularDB
FROM DATABASE_SNAPSHOT = 'SnapshotDB';
GO
-- Select from Regular and Snapshot Database
SELECT * FROM RegularDB.dbo.FirstTable;
SELECT * FROM SnapshotDB.dbo.FirstTable;
GO
-- Clean up
DROP DATABASE [SnapshotDB];
DROP DATABASE [RegularDB];
GO

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: SQL, SQL Authority, SQL Backup and Restore, SQL Data Storage, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology
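One hedged way to observe the sparse-file growth discussed above is the sys.dm_io_virtual_file_stats DMV, whose size_on_disk_bytes column reports the real on-disk size of a sparse file; a sketch like the following can be run between the DELETE and the RESTORE:

-- Check the actual on-disk size of the snapshot's sparse data file (file_id 1).
SELECT DB_NAME(database_id) AS database_name,
       file_id,
       size_on_disk_bytes
FROM sys.dm_io_virtual_file_stats(DB_ID('SnapshotDB'), 1);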

    Read the article

  • WCF REST on .Net 4.0

    - by AngelEyes
A simple and straightforward article taken from: http://christopherdeweese.com/blog2/post/drop-the-soap-wcf-rest-and-pretty-uris-in-net-4

Drop the Soap: WCF, REST, and Pretty URIs in .NET 4

Years ago I was working in libraries when the Web 2.0 revolution began. One of the things that caught my attention about early start-ups using the AJAX/REST/Web 2.0 model was how nice the URIs were for their applications. Those were my first impressions of REST: pretty URIs. Turns out there is a little more to it than that.

REST is an architectural style that focuses on resources and structured ways to access those resources via the web. REST evolved as an "anti-SOAP" movement, driven by developers who did not want to deal with all the complexity SOAP introduces (which is a lot when you don't have frameworks hiding it all). One of the biggest benefits of REST is that browsers can talk to REST services directly, because REST works using URIs, query strings, cookies, SSL, and all those HTTP verbs that we normally don't have to think about.

If you are familiar with ASP.NET MVC then you have been exposed to REST at some level. MVC relies heavily on routing to generate consistent and clean URIs. REST for WCF gives you the same type of feel for your services. Let's dive in.

WCF REST in .NET 3.5 SP1 and .NET 4

This post will cover WCF REST in .NET 4, which drew heavily from the REST Starter Kit and community feedback. There is basic REST support in .NET 3.5 SP1, and you can also grab the REST Starter Kit to enable some of the features you'll find in .NET 4. This post will cover REST in .NET 4 and Visual Studio 2010.

Getting Started

To get started we'll create a basic WCF REST Service Application using the new on-line templates option in VS 2010. When you first install a template you are prompted with a confirmation dialog.

Dude, Where's My .Svc File?

The WCF REST template shows us the new way we can simply build services. Before we talk about what's there, let's look at what is not there:

The .svc file
An interface contract
Dozens of lines of configuration that you have to change to make your service work

REST in .NET 4 is greatly simplified and leverages the web routing capabilities used in ASP.NET MVC and other parts of the web frameworks. With REST in .NET 4 you use a global.asax to set the route to your service using the new ServiceRoute class. From there, the WCF runtime handles dispatching service calls to the methods based on the URI templates.

global.asax

using System;
using System.ServiceModel.Activation;
using System.Web;
using System.Web.Routing;

namespace Blog.WcfRest.TimeService
{
    public class Global : HttpApplication
    {
        void Application_Start(object sender, EventArgs e)
        {
            RegisterRoutes();
        }

        private static void RegisterRoutes()
        {
            // "TimeService" becomes the resource name after the host portion of the Uri.
            RouteTable.Routes.Add(new ServiceRoute("TimeService",
                new WebServiceHostFactory(), typeof(TimeService)));
        }
    }
}

The web.config contains some new structures to support a configuration-free deployment. Note that this is the default config generated with the template; I did not make any changes to web.config.
web.config

<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="true" targetFramework="4.0" />
  </system.web>
  <system.webServer>
    <modules runAllManagedModulesForAllRequests="true">
      <add name="UrlRoutingModule" type="System.Web.Routing.UrlRoutingModule,
           System.Web, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a" />
    </modules>
  </system.webServer>
  <system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true"/>
    <standardEndpoints>
      <webHttpEndpoint>
        <!--
            Configure the WCF REST service base address via the global.asax.cs file and the default endpoint
            via the attributes on the <standardEndpoint> element below
        -->
        <standardEndpoint name="" helpEnabled="true" automaticFormatSelectionEnabled="true"/>
      </webHttpEndpoint>
    </standardEndpoints>
  </system.serviceModel>
</configuration>

Building the Time Service

We'll create a simple "TimeService" that returns the current time. Let's start with the following code:

using System;
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.ServiceModel.Web;

namespace Blog.WcfRest.TimeService
{
    [ServiceContract]
    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall)]
    public class TimeService
    {
        [WebGet(UriTemplate = "CurrentTime")]
        public string CurrentTime()
        {
            return DateTime.Now.ToString();
        }
    }
}

The endpoint for this service will be http://[machinename]:[port]/TimeService. To get the current time, http://[machinename]:[port]/TimeService/CurrentTime will do the trick.

The Results Are In

Remember that route in global.asax? Turns out it is pretty important. When you set the route name, that defines the resource name starting after the host portion of the Uri.

Help Pages in WCF 4

Another feature that came from the starter kit is the help pages. To access the help pages, simply append "help" to the end of the service's base Uri.

Dropping the Soap

Having dabbled with REST in the past and after using SOAP for the last few years, the WCF 4 REST support is certainly refreshing. I'm currently working on some REST implementations in .NET 3.5 and VS 2008 and am looking forward to working on REST in .NET 4 and VS 2010.
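To smoke-test the service, a minimal console client might look like this hedged sketch; the host/port is an assumption (adjust to wherever the service actually runs), and with the default endpoint settings the response body is an XML-serialized string:

using System;
using System.Net;

class TimeServiceClient
{
    static void Main()
    {
        using (var client = new WebClient())
        {
            // Adjust host/port to match your deployment.
            string body = client.DownloadString(
                "http://localhost:8080/TimeService/CurrentTime");

            // Expected shape of the default response:
            // <string ...>3/29/2010 1:23:45 PM</string>
            Console.WriteLine(body);
        }
    }
}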

    Read the article

  • MEF CompositionInitializer for WPF

    - by Reed
The Managed Extensibility Framework is an amazingly useful addition to the .NET Framework. I was very excited to see System.ComponentModel.Composition added to the core framework. Personally, I feel that MEF is one tool I've always been missing in my .NET development.

Unfortunately, one scenario that is perfect for MEF, yet where it tends to fall short of its full potential, is Windows Presentation Foundation development. In particular, there are many times when the XAML parser constructs objects in WPF development, which makes composition of those parts difficult. The current release of MEF (Preview Release 9) addresses this for Silverlight developers via System.ComponentModel.Composition.CompositionInitializer. However, there is no equivalent class for WPF developers.

The CompositionInitializer class provides the means for an object to compose itself. This is very useful in WPF and Silverlight development, since it allows a View, such as a UserControl, to be generated via the standard XAML parser and still automatically pull in the appropriate ViewModel in an extensible manner. Glenn Block has demonstrated the usage for Silverlight in detail, but the same issues apply in WPF.

As an example, let's take a look at a very simple case. Take the following XAML for a Window:

<Window x:Class="WpfApplication1.MainView"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="MainWindow" Height="220" Width="300">
    <Grid>
        <TextBlock Text="{Binding TheText}" />
    </Grid>
</Window>

This does nothing but create a Window, add a simple TextBlock control, and use it to display the value of our "TheText" property in our DataContext class. Since this is our main window, WPF will automatically construct and display this Window, so we need to handle constructing the DataContext and setting it ourselves.

We could do this in code or in XAML, but doing it directly would mean either hard-coding the ViewModel type into our XAML or constructing the ViewModel class and setting it in the code-behind. Both have disadvantages, and the disadvantages grow if we're using MEF to compose our ViewModel.

Ideally, we'd like to have MEF construct our ViewModel for us. This way, it can provide any construction requirements for our ViewModel via [ImportingConstructor], and it can handle fully composing the imported properties on our ViewModel. CompositionInitializer allows this to occur.

We use CompositionInitializer within our View's constructor for self-composition of the View. Using CompositionInitializer, we can modify our code-behind to:

public partial class MainView : Window
{
    public MainView()
    {
        InitializeComponent();
        // Satisfy this View's imports (including the ViewModel) at construction time.
        CompositionInitializer.SatisfyImports(this);
    }

    [Import("MainViewModel")]
    public object ViewModel
    {
        get { return this.DataContext; }
        set { this.DataContext = value; }
    }
}

We can then add an Export to our ViewModel class like so:

[Export("MainViewModel")]
public class MainViewModel
{
    public string TheText
    {
        get { return "Hello World!"; }
    }
}

MEF will automatically compose our application, deferring the injection of our ViewModel into the View's DataContext until runtime. When we run this, we'll see a window displaying "Hello World!".

There are many other approaches for using MEF to wire up the extensible parts within your application, of course.
However, any time an object is going to be constructed by code outside of your control, CompositionInitializer allows us to continue to use MEF to satisfy the import requirements of that object.

In order to use this from WPF, I've ported the code from MEF Preview 9 and Glenn Block's (now obsolete) PartInitializer port to Windows Presentation Foundation. There are some subtle changes from the Silverlight port, mainly to handle running in a desktop application context. The default behavior of my port is to construct an AggregateCatalog containing a DirectoryCatalog set to the location of the entry assembly of the application. In addition, if an "Extensions" folder exists under the entry assembly's directory, a second DirectoryCatalog for that folder will be included. This behavior can be overridden by specifying a CompositionContainer or one or more ComposablePartCatalogs to the System.ComponentModel.Composition.Hosting.CompositionHost static class prior to the first use of CompositionInitializer (see the sketch after the download links below).

Please download CompositionInitializer and CompositionHost for VS 2010 RC, and contact me with any feedback. Composition.Initialization.Desktop.zip

Edit on 3/29: Glenn Block has since updated his version of CompositionInitializer (and ExportFactory<T>!), and made it available here: http://cid-f8b2fd72406fb218.skydrive.live.com/self.aspx/blog/Composition.Initialization.Desktop.zip This is a .NET 3.5 solution and should soon be pushed to CodePlex and made available on the main MEF site.
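As a hedged illustration of the catalog override described above - assuming the desktop port mirrors the Silverlight CompositionHost.Initialize overloads (check the downloaded source for the exact signature) - application startup might look like:

using System.ComponentModel.Composition.Hosting;
using System.Reflection;

static class CompositionBootstrapper
{
    // Call once at startup, before any view calls CompositionInitializer.SatisfyImports.
    public static void Configure()
    {
        var catalog = new AggregateCatalog(
            new AssemblyCatalog(Assembly.GetExecutingAssembly()),
            new DirectoryCatalog("Plugins"));   // "Plugins" is a hypothetical folder name

        CompositionHost.Initialize(catalog);
    }
}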

    Read the article

< Previous Page | 544 545 546 547 548 549 550 551 552 553 554 555  | Next Page >