Search Results

Search found 40998 results on 1640 pages for 'setup project'.


  • Problem running MVC3 app in IIS 7

    - by mjmoore99
    I am having a problem getting an MVC 3 project running in IIS 7 on a computer running Windows 7 Home, 64-bit. Here is what I did:

    1. Installed IIS 7. Accessed the server and got the IIS welcome page.
    2. Created a directory named d:\MySite and copied the MVC application to it. (The MVC app is just the standard app that is created when you create a new MVC 3 project in Visual Studio; it displays a home page and an account logon page. It runs fine inside the Visual Studio development server, and I also copied it out to my hosting site, where it works fine too.)
    3. Started the IIS management console, stopped the default site, and added a new site named "MySite" with a physical directory of "d:\MySite".
    4. Changed the application pool named MySite to use .NET Framework 4.0 with the integrated pipeline.

    When I access the site in the browser I get a list of the files in the d:\MySite directory. It is as if IIS is not recognizing the contents of d:\MySite as an MVC application. What do I need to do to resolve this?
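
    A directory listing usually means requests are never reaching ASP.NET at all. A hedged sketch of one common fix, assuming the .NET 4 framework was installed before IIS and therefore never registered its handlers (run from an elevated prompt; the framework path may differ on your machine):

        %windir%\Microsoft.NET\Framework64\v4.0.30319\aspnet_regiis.exe -iru

    After registering, restart the site and check that the extensionless-URL handlers appear under Handler Mappings for the site.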

    Read the article

  • haproxy + nginx: https trailing slashes redirected to http

    - by user1719907
    I have a setup where HTTP(S) traffic goes from HAProxy to nginx:

                  HAProxy       nginx
        HTTP  ----->  :80  ---->  :9080
        HTTPS ----->  :443 ---->  :9443

    I'm having trouble with implicit redirects caused by trailing slashes going from https to http, like this:

        $ curl -k -I https://www.example.com/subdir
        HTTP/1.1 301 Moved Permanently
        Server: nginx/1.2.4
        Date: Thu, 04 Oct 2012 12:52:39 GMT
        Content-Type: text/html
        Content-Length: 184
        Location: http://www.example.com/subdir/

    The reason obviously is that HAProxy works as the SSL unwrapper, so nginx sees only http requests. I've tried setting X-Forwarded-Proto to https in the HAProxy config, but it does nothing. My nginx setup is as follows:

        server {
            listen 127.0.0.1:9443;
            server_name www.example.com;
            port_in_redirect off;
            root /var/www/example;
            index index.html index.htm;
        }

    And the relevant parts from the HAProxy config:

        frontend https-in
            bind *:443 ssl crt /etc/example.pem prefer-server-ciphers
            default_backend nginxssl

        backend nginxssl
            balance roundrobin
            option forwardfor
            reqadd X-Forwarded-Proto:\ https
            server nginxssl1 127.0.0.1:9443
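
    If this nginx version can't be told the original scheme, one workaround is to rewrite the Location header on the way back through HAProxy. A hedged sketch, assuming HAProxy 1.4+ where rspirep rewrites response headers:

        backend nginxssl
            balance roundrobin
            option forwardfor
            reqadd X-Forwarded-Proto:\ https
            # rewrite the http:// redirects nginx issues back to https://
            rspirep ^Location:\ http://(.*)  Location:\ https://\1
            server nginxssl1 127.0.0.1:9443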

    Read the article

  • How can I document and automate a system's configuration?

    - by Diomidis Spinellis
    Having a system's configuration represented only by its current state is risky, inefficient, and opaque. At some point you may be left with an unsupported system and no upgrade path; configuring a new system compatible with the old one then becomes a process of trial and error. Furthermore, if at some point the system is damaged, the only option is to go back to the most recent full backup and try to remember what changes followed from that point. Also, the only way to create a system compatible with the original is through a complete dump/restore. Finally, in such a setup there's no way to know how you solved a particular problem; the only thing you can do is look at the corresponding configuration files and try to guess what you changed to achieve the desired effect.

    Currently, for each system I maintain, I keep a log file where I record all system administration activity, starting from the installation: installation options, added packages, changes in configuration files, updates, problem fixes, etc. In theory this allows me to (manually) replay all changes to arrive at the current state, or to unroll an erroneous change by executing the reverse commands. However, this process is also inefficient, error-prone, and reliant on human judgment.

    Another thing I've tried is to put the /etc configuration files under version control with git. This helps me document the changes automatically and also apply them on a clean setup. But it's not without problems: git has to run under sudo, passwords and private keys may be stored in the repository, installed packages can't be meaningfully tracked, and git will have a fit if I try to extend this approach to all the system's directories.

    I've also thought about performing all changes through shell scripts or makefiles, but I think this process would require a lot of effort and would be fragile. Are there better methods or tools that I'm missing?
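
    For reference, a minimal sketch of the /etc-under-git approach described above (assuming git is installed; everything runs under sudo so file ownership and modes are preserved):

        cd /etc
        sudo git init
        sudo git add -A
        sudo git commit -m "baseline configuration"
        # after every administrative change:
        sudo git add -A && sudo git commit -m "describe the change here"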

    Read the article

  • ATI Radeon 5670 Won't Show Resolutions over 1400x900

    - by Phil Sandler
    Just got my new Dell computer with Windows 7 and an ATI Radeon 5670. I attached it to my current monitor, which is a Samsung 24" (2443BWT). Windows 7 does not allow me to display in resolutions greater than 1400x900. The setup runs through a VGA cable into the VGA port of the card. The card also has a DVI port, but I need to use the VGA port because my KVM supports VGA only. My old PC (Windows XP, GeForce 8600 video) can display 1900x1200 on the same monitor (which is what I want) and even higher. It does this through a VGA cable also connected to the KVM (out of the DVI port, using an adapter). I have tried the same setup (DVI-to-VGA adapter) on the new PC and nothing changed. I have tried:

    - Updating the drivers via Windows "Update Driver" (says they are current)
    - Installing the updated version of the drivers from ATI (made no difference)
    - Installing PowerStrip (all the options I would need for a custom resolution are greyed out)

    Installing the drivers/software from ATI caused the ATI Catalyst Control Center software to stop functioning, so I can no longer even start it. I have found some references to other people having this problem, with instructions on cleaning the software off and reinstalling it (as uninstalling normally doesn't solve it); I will try that tonight. In any case, I didn't see any options in CCC that would let me override the maximum resolution. However, I didn't tinker with it much before I tried updating the drivers, so I may have missed a setting. I contacted Samsung via online chat and they say it's a problem with the video card/driver (of course, what else would they say?). Any thoughts on what else I could try?

    Read the article

  • Scripting around the lack of user:password@domain URL functionality in JScript/IE

    - by Idiomatic
    I currently have a JScript file that runs a PHP script on a server for me; dead simple. But I want to be at least somewhat secure, so I set up a login. Now, if I use the regular user:password@domain system, it won't work (IE decided it was a security issue). And if I let IE just remember the password, it pops up a security message confirming my login every time (which kills the point of the button). So I need a way to make the security message go away. I could:

    1. Lower the security settings. To be honest I am fine with that, but nothing I've tried suppresses the prompt (there might be some registry setting to change).
    2. Find a fix for JScript that lets me use a password in the URL. There used to be a registry edit that allowed IE on older systems to use URL passwords, but it does not work on my 64-bit Windows 7 setup, and I doubt it would have helped JScript anyway (since it outright crashes).
    3. Use an app other than IE, in which case I'm not sure how to go about it. I want it to be responsive and invisible, so IE was a good choice; it is near instant.
    4. Use XMLHttpRequest instead of IE directly. It may even be faster, but I've no idea if it would help or just hit the same error (see the sketch after the script below).
    5. Use a completely different approach: maybe some app that can script website browsing.

    Here is the current script:

        var objIEA = new ActiveXObject("InternetExplorer.Application");
        objIEA.visible = false;
        if (WScript.Arguments.Item(0) == "pause") {
            objIEA.navigate("http://domain/index.html?pause");
        }
        if (WScript.Arguments.Item(0) == "next") {
            objIEA.navigate("http://domain/index.html?next");
        }
        // wait for the page to finish loading before closing IE
        while (objIEA.readyState != 4) {
            WScript.Sleep(100);
        }
        objIEA.quit();
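
    Regarding option 4, a hedged sketch of the XMLHttpRequest route, assuming the server uses HTTP Basic authentication (the user name and password are placeholders):

        // send credentials directly; no IE window, and no URL-credential restriction,
        // since the credentials travel in the Authorization header rather than the URL
        var xhr = new ActiveXObject("MSXML2.XMLHTTP");
        xhr.open("GET", "http://domain/index.html?pause", false, "user", "password");
        xhr.send();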

    Read the article

  • Server periodically freezing - Help Stabilizing

    - by JonDog
    We run an ASP.NET/SQL Server data collection website with a handful of clients dumping data in and running reports. We moved to a new server (specs below) and have had issues with it freezing, forcing us to reboot it a dozen times over the past six months. The hosting company has mentioned possible causes (listed below) but can't give a definite answer on what is going wrong. They have offered to reconfigure however I like. We have benefited from having a much faster system and really don't want to get rid of the SSDs unless they are the issue. Two possible setup changes that I've talked with them about are also listed below. Any suggestions on what may be causing the freezing, as well as suggestions on a new setup, would be great. My main questions are: do SSDs generally have problems running the OS and SQL Server on the same RAID array? And are the new SSDs still unrefined enough that they shouldn't run in a production environment? Thanks.

    Current:
    - Xeon Quad Core E3-1270, 3.40 GHz
    - 16 GB DDR3-1333 ECC SDRAM
    - First hard drive: 120 GB Intel SSD
    - Second hard drive: 120 GB Intel SSD
    - Third hard drive: 120 GB Intel SSD
    - Fourth hard drive: 120 GB Intel SSD
    - SAS 4-port RAID card
    - Windows 2012 Standard Edition, 64-bit
    - MSSQL 2008 Web Edition

    Possible causes:
    - Running SQL Server and the OS on the same RAID array
    - OS software issues
    - Using SSDs
    - CPU underpowered
    - Not enough RAM

    Option 1:
    - 2x Xeon Quad Core E5-2603, 1.80 GHz
    - 16 GB DDR3-1333 ECC SDRAM
    - 1 x 240 GB Intel SSD (OS)
    - 3 x 1 TB SATA HDD, 7200 RPM (SQL Server)
    - SATA 4-port RAID card
    - Windows 2012 Standard Edition, 64-bit

    Option 2:
    - Dell PowerEdge, E3-1270v2 3.5 GHz, 4 cores
    - 16 GB DDR3-1600 UDIMM
    - 4 x 128 GB Samsung 840 Pro SSD
    - Add-in H200 (SAS/SATA controller), 4 hard drives, RAID 10
    - Windows 2012 Standard Edition, 64-bit

    Read the article

  • How to divert traffic based on hostname using HAProxy?

    - by Bosky
    I've had some initial success with HAProxy, setting up a bunch of app servers listening on various other ports. I now have another web server listening on one port, and I'd like to know what changes to make to my config to route traffic by hostname as well. The following is the current setup, assuming:

    - my Apache web server is running at examplecom:8001
    - my bunch of app servers at 0.0.0.0:8081, 0.0.0.0:8082, 0.0.0.0:8083

        global
            log 127.0.0.1 local0
            log 127.0.0.1 local1 notice
            maxconn 4096
            debug
            #quiet
            #user haproxy
            #group haproxy

        defaults
            log global
            mode http
            option httplog
            option dontlognull
            retries 3
            redispatch
            maxconn 2000
            contimeout 5000
            clitimeout 50000
            srvtimeout 50000

        listen appservers 0.0.0.0:80
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            #option httpchk HEAD /check.txt HTTP/1.0
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            server inst2 0.0.0.0:8082 cookie server02 check inter 2000 fall 3
            server inst3 0.0.0.0:8083 cookie server01 check inter 2000 fall 3
            server inst4 0.0.0.0:8084 cookie server02 check inter 2000 fall 3
            capture cookie vgnvisitor= len 32

    (Any other comments on the above setup are welcome.) Now I'd like to keep the same setup as above, but in addition, if the hostname is myspecialtopleveldomain<dot>com, I would like to route that traffic to example<dot>com:8001. ~B
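
    A hedged sketch of hostname-based routing: split the listen block into a frontend with an ACL and two backends (directives per HAProxy 1.3/1.4; the Apache host name is a placeholder for the real one):

        frontend http-in
            bind 0.0.0.0:80
            mode http
            # match the Host header, case-insensitively
            acl is_special hdr(host) -i myspecialtopleveldomain.com
            use_backend apacheweb if is_special
            default_backend appservers

        backend apacheweb
            mode http
            server apache1 example.com:8001 check inter 2000 fall 3

        backend appservers
            mode http
            balance roundrobin
            option httpclose
            option forwardfor
            server inst1 0.0.0.0:8081 cookie server01 check inter 2000 fall 3
            # ... remaining inst2-inst4 and capture lines as in the listen block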

    Read the article

  • Trouble with Remote Desktop pulling through printers. Drive redirection works, and the ports are created, but not the printers

    - by Windex
    I've run out of things to look into. All the support documents have been gone through and still provide no resolution. I've checked the service permissions (sc sdshow spooler); they all match up with other systems and with what the support documents show. I'm nearly positive the issue can't be permissions anyway, as the software requires all users to be administrators, so all users are local administrators. (I haven't looked into why yet, but it's on the list; I was only recently brought into this team, and we've put procedures in place for quick recovery.)

    We've applied hotfixes relating to RDS and printing, though I'm not sure which ones they were. I've combed through Group Policy, and nowhere is printer redirection disabled. It's set up with all default values regarding the use and redirection of printers, and a quick install of W2k8 R2 shows that it works by default. This dev install was joined to the same domain, placed in the same OU, shows the same policies applied, etc., etc.

    The server generates all the correct redirected ports, but no printers are created. It will also redirect drives without issue, which would seem to rule out the user-mode service that handles redirects being broken. No related events are logged, and there are no events from the TerminalServices-Printer source. There were local printers set up; I didn't think it would matter, but as I was running out of ideas I tried deleting them all, with no change.

    The TS was configured for the software it will be running before we checked printer redirection, so the other team responsible for setting up new servers wants to find a fix instead of loading a new server. I'm not sure where or what else to look for. Any ideas?
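
    One more thing worth ruling out (a hedged suggestion): the per-listener switch that disables client printer mapping, which sits outside Group Policy. The RDP-Tcp listener setting can be checked from the command line:

        reg query "HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\WinStations\RDP-Tcp" /v fDisableCpm

    A value of 0x1 means client printer redirection is blocked at the listener regardless of what Group Policy says.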

    Read the article

  • "No such file or directory"?

    - by user1509541
    Ok, so I have a VDS lying around, and I thought I would turn it into a TF2 game server. I connect to the server through PuTTY and use wget to download the package "hldsupdatetool.bin" from steampowered.com. When I go to run it, it says "No such file or directory". When I use "ls" to see what files are in the directory, it lists "hldsupdatetool.bin" as being there. So why is it saying it's not there? This has been a headache for the past 2 days. It's returning:

        root@10004:~# wget http://www.steampowered.com/download/hldsupdatetool.bin
        --2012-07-08 06:04:49--  http://www.steampowered.com/download/hldsupdatetool.bin
        Resolving www.steampowered.com... 208.64.202.68
        Connecting to www.steampowered.com|208.64.202.68|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 3513408 (3.4M) [application/octet-stream]
        Saving to: “hldsupdatetool.bin.3”

        100%[======================================>] 3,513,408   2.45M/s   in 1.4s

        2012-07-08 06:04:51 (2.45 MB/s) - “hldsupdatetool.bin.3” saved [3513408/3513408]

        root@10004:~# chmod +x hldsupdatetool.bin.3
        root@10004:~# ./hldsupdatetool.bin.3
        -bash: ./hldsupdatetool.bin.3: No such file or directory

    More:

        root@10004:~# ls
        ffmpeg-packages     hldsupdatetool.bin.1  hldsupdatetool.bin.3
        hldsupdatetool.bin  hldsupdatetool.bin.2  setup.sh
        root@10004:~# ls -la
        total 13828
        drwx------  4 root root    4096 Jul  8 06:04 .
        drwxr-xr-x 21 root root    4096 Jul  8 05:57 ..
        -rw-------  1 root root    8799 Jul  8 06:26 .bash_history
        -rw-r--r--  1 root root     570 Jan 31  2010 .bashrc
        -rw-r--r--  1 root root       4 Jul  2 19:39 .custombuild
        drwxr-xr-x  2 root root    4096 Jul  4 18:49 ffmpeg-packages
        ---x--xrwx  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin
        -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.1
        -rw-r--r--  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.2
        -rwxr-xr-x  1 root root 3513408 Sep  2  2005 hldsupdatetool.bin.3
        -rw-r--r--  1 root root     140 Nov 19  2007 .profile
        -rw-------  1 root root    1024 Jul  2 19:49 .rnd
        -rwxr-xr-x  1 root root   38866 May 23 22:02 setup.sh
        drwxr-xr-x  2 root root    4096 Jul  2 19:44 .ssh
        root@10004:~#
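
    A hedged diagnosis: "No such file or directory" for a file that plainly exists usually means the kernel cannot find the interpreter or dynamic loader the binary asks for, and hldsupdatetool.bin is a 32-bit executable. On a 64-bit Debian/Ubuntu VDS, for example, something along these lines may confirm and fix it (the package name varies by release):

        file hldsupdatetool.bin          # expect "ELF 32-bit LSB executable"
        apt-get install ia32-libs        # pulls in the 32-bit loader and libraries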

    Read the article

  • Good support for multiple desktops AND multiple monitors in Linux (Ubuntu)?

    - by Somebody still uses you MS-DOS
    I'm starting to have A LOT of open windows on my machine. Within a single project I might have e-mail, task management, personal e-mail, and Twitter, plus a lot of different applications and terminals open in my Linux environment. Nowadays I have 4 workspaces:

    1. Corporate management (e-mail) and corporate messenger
    2. Work (documents, requisites)
    3. Dev (development: all gVim windows, terminal, and Firefox for development)
    4. Personal (personal e-mail, Delicious, Twitter, and so on)

    Sometimes it would be interesting to have a workspace per project instead of the configuration I have today, which groups classes of work (bad name, I know, but I think you get the idea). I'm starting to think about using two monitors: one with Corporate Management, Work, and Personal. The second monitor would hold only development: each workspace there would be for a project being worked on, instead of a group of work like before; one workspace might be for implementing different classes, for example.

    My question is: I want to switch to the second monitor using just the mouse, while still being able to change workspaces on the same monitor using keyboard shortcuts. The keyboard shortcuts wouldn't change monitors, just workspaces on the same monitor. Does Linux (Ubuntu 10.04 Lucid Lynx) support this envisioned setup? If so, how?

    Read the article

  • Multiple SSL certificates on one server

    - by Kyle O'Brien
    We're hosting two websites on our fairly tiny but dedicated production server. Both websites require SSL, so we have virtual hosts set up for both of them. Each references its own domain.key, domain.crt, and domain.intermediate.crt files; each CSR and certificate was created with its own unique information, and nothing is shared between them (other than the server itself). However, whichever site's symbolic link (set up in /etc/apache2/sites-enabled) is referenced first is the site whose certificate gets served, even when we're visiting the second site.

    For example, assume our companies are Cadbury and Nestle. We set up both sites with their own certificates, but we create Cadbury's symbolic link in Apache's sites-enabled folder first and then Nestle's. You can visit Nestle perfectly fine, but if you check the certificate, it references Cadbury's certificate.

    We're hosting these websites on a dedicated Ubuntu 12.04.3 LTS server. Both certificates are provided by Thawte.com. I came across a few potential solutions without any degree of success. I'm hoping someone else has a decent solution? Thanks.

    Edit: The only other solution that seems to have worked for some people is using SNI with Apache. However, the setups described didn't seem to coincide with our setup at all.
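
    For what it's worth, a hedged sketch of the SNI arrangement on Ubuntu 12.04's Apache 2.2 (server names and file paths are placeholders; each site keeps its own key, cert, and chain):

        # /etc/apache2/ports.conf
        NameVirtualHost *:443
        Listen 443

        # sites-available/cadbury
        <VirtualHost *:443>
            ServerName cadbury.example.com
            SSLEngine on
            SSLCertificateFile      /etc/ssl/certs/cadbury.crt
            SSLCertificateKeyFile   /etc/ssl/private/cadbury.key
            SSLCertificateChainFile /etc/ssl/certs/cadbury.intermediate.crt
        </VirtualHost>

        # sites-available/nestle: same shape, with Nestle's name and files

    Without NameVirtualHost *:443, Apache treats the two SSL vhosts as duplicates and serves the first one's certificate for every request, which matches the symptom described.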

    Read the article

  • IPtables and Remote Desktop with Proxy

    - by Sebastian
    So I set up a Windows Web Server 2008 R2 VM on VirtualBox, currently using a bridged network. I can remote desktop to the machine hosting the VM (10.0.0.183) but cannot remote desktop to the VM itself (10.0.0.195). The remote port on the VM is set to 5003, and the VM is set up to accept remote connections (Windows side). We also use a proxy (CentOS 5) for our internet access, and I added these rules under NAT on the proxy box:

        # accept inbound RDP on the proxy itself
        -A INPUT -p tcp --dport 3389 -j ACCEPT
        # redirect incoming RDP from the internet link to port 5003
        -A PREROUTING -i ppp0 -p tcp --dport 3389 -j REDIRECT --to-port 5003
        # allow forwarded RDP traffic to the VM
        -A FORWARD -p tcp -d 10.0.0.195 --dport 5003 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT

    I've been trying for hours and hours and just cannot get it to work. I also used FreeDNS so that we can use a domain name to connect to this VM over the internet (the DNS points to our external IP address). If we don't get this right we will have to purchase a PPPoE connection from an ISP to connect to this VM remotely, but I know there is an alternative route if I can just get this port forwarding right!
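
    A hedged sketch of what may be missing: REDIRECT sends the traffic to a local port on the proxy box itself, whereas reaching the VM needs a DNAT to its address, plus forwarding enabled:

        iptables -t nat -A PREROUTING -i ppp0 -p tcp --dport 3389 \
                 -j DNAT --to-destination 10.0.0.195:5003
        iptables -A FORWARD -p tcp -d 10.0.0.195 --dport 5003 \
                 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
        echo 1 > /proc/sys/net/ipv4/ip_forward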

    Read the article

  • How to protect folder privacy against unethical network administrators? [closed]

    - by Trevor Trovalds
    I just need a technical solution for the safety of my group's shared passwords, projects, work, etc.

    Our network has Active Directory with public/groups/users and NTFS permissions, under a Windows Server 2003 that will soon migrate to Windows Server 2008 R2. Our IT crowd is small, consisting of 2 DBAs, 4 designers, 6 developers (including me), 2 network admins, and (a lot of) tech supporters; everyone has local admin rights. Those 2 network admins weren't the ones who set the network up; they just took over recently when the previous ones quit. We usually find them laughing at private content from users stored in the groups AD share, sabotaging documents that don't match their personal tastes, and, finally, this week we found out they stole a project we (developers and DBAs) were finishing and, long before, had presented it to the CEO as theirs without us knowing.

    I'm a systems analyst, and initially my group decided to store critical content, like shared passwords, inside encrypted .zip files. Unfortunately we couldn't do the same for the other hundreds of folders and files, which included the stolen project, because the zipping process would take too long for every update. We also tried an encrypted Subversion repository over SSL, but there are many dummies (~38 at the moment) involved in the projects who have trouble using TortoiseSVN when contributing, and very often we had to fix messed-up updates.

    Well, I think these two give an idea of what we've been trying to achieve. So, is there a practical "individual" protection for our extensive data, or can my hope already be euthanized?

    P.S.: Seriously, at the place where I live/work, political corruption has gone wild, so options that rely on formal complaints are likely impracticable. Both netadmins have a strong "political bond" with the CEO and the President, hence their lousy behavior and our failed attempts to report them.
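
    A hedged sketch of one client-side option, assuming the files live on NTFS: EFS encrypts files under each user's own key, so an administrator on the file server cannot read them without that key (they can still delete them; note that a domain's default recovery agent can decrypt, so check who holds it, and EFS on network shares requires the server to be trusted for delegation, so test on a copy first). The path below is a placeholder:

        rem encrypt the tree under the current user's EFS key
        cipher /e /s:D:\Projects\Shared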

    Read the article

  • FDE / SSD - partition and leave some unencrypted?

    - by Web Design Hero
    Just bought a used beast of a desktop PC. The system drive is set up as RAID 0 across two Intel 510 SSDs, 128 GB each. I will probably not have too many programs beyond Office, and maybe Adobe CS if I spring for it; I will be keeping big data on a regular HDD. My question is about setting up TrueCrypt with this configuration. I have not previously done full disk encryption, but I feel it's probably a good idea.

    I have done some speed tests using file containers on the HDD and the SSD with TrueCrypt. While the SSDs take a huge hit under TrueCrypt, they still outperform the HDD on its own by a good margin, so I think I will be okay for my needs. I have seen in a few places the recommendation to partition the drive and leave some of the SSD outside TrueCrypt. Does this really make a difference? If so, how much should I leave? Will there be any issue in the RAID 0 configuration? I am not really concerned about the wear-leveling issue; I care more about not losing data and staying secure, and since I don't need all that space anyway, I would like to optimize my setup for security and speed.

    Read the article

  • Customer won't provide SSH access - FTP only

    - by Max
    Eh, here is my problem: I am working in a web development agency (that's a problem, but not the real problem; read on). Most of the time I choose the live server myself when creating a new website project. But now the customer already has a "server" (10 GB on a cheapo host!) and the "admin" refuses to give me SSH access to it. I need shell access to the server because many files will be transported (I need to be able to upload and extract a tar) and I need to insert or create MySQL dumps via the command line. He argues that FTP and phpMyAdmin should be enough. As far as I know, the webspace was ordered just to host the website, so no security-critical apps are running there.

    How can I either convince the admin to give me the SSH login or tell management that we need our own server? Does anyone have similar experiences? This is really annoying, as this is a very small project that should be done fast, and now one has to fight just to get the work done...
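
    If the SSH login never materializes, a hedged workaround sketch: a throwaway PHP script uploaded over FTP can do the unpacking and the import, assuming the host allows system() (many cheap hosts disable it); the paths and credentials below are placeholders, and the script should be deleted immediately afterwards:

        <?php
        // unpack the uploaded site archive into the document root
        system('tar -xzf site.tar.gz -C /path/to/docroot 2>&1');
        // import the database dump via the mysql command-line client
        system('mysql -u dbuser -pdbpass dbname < dump.sql 2>&1');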

    Read the article

  • Working with an external button box

    - by Scott
    I tried this question on Stack Overflow, but I was pointed here, so here goes. For a new project of mine, I am looking for a way to (for example) open a pop-up window on my laptop by pressing a button on an external device (to be built by myself, or at least bought) connected via USB. Basically I am looking at something like an Arduino or Raspberry Pi (IF I am looking in the right direction) with buttons on it; as soon as I hit a button on the external box, a command activates on my laptop and, for example, opens a popup window in which I can input text.

    Does anyone know:

    1. if it is possible to do this at all;
    2. what equipment is needed for the external box, and what programming is needed?

    I prefer .NET, but maybe it can only be done with software from the external box. If anyone can point me in the right direction, like a make/model for the external box or websites, I would be very happy. I have knowledge of Visual Studio/.NET, but I am willing to learn other languages if .NET is not an option for this project.

    Thanks in advance,
    Scott

    PS: If anyone knows of some better tags, or at least knows what I mean and needs me to edit the question, please do tell me... I am new on Stack Overflow/Super User.
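
    This is certainly possible. A hedged sketch of the laptop side in C#, assuming the button box is an Arduino that prints a line such as "BTN1" over its USB serial connection (the COM port name and message format are assumptions):

        using System;
        using System.IO.Ports;

        class ButtonListener
        {
            static void Main()
            {
                // the Arduino appears as a virtual COM port over USB
                using (var port = new SerialPort("COM3", 9600))
                {
                    port.Open();
                    while (true)
                    {
                        string cmd = port.ReadLine().Trim();   // e.g. "BTN1"
                        if (cmd == "BTN1")
                            Console.WriteLine("Button 1 pressed - show the popup here");
                    }
                }
            }
        }

    On the Arduino side, the matching sketch would just be a digitalRead of the button pin and a Serial.println when it goes high.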

    Read the article

  • Windows Server 2008 - Non-Domain users can see my server shares

    - by ManovrareSoft
    Windows Server 2008 - server machine
    Windows 7 Professional - client machine

    I have a domain; it was set up by the client. The shares on the server are restricted correctly when a user logs on to the domain and uses their workstation. I have a few groups set up to restrict some access, but the groups are at their core "Domain Users". The problem I am having is that when a user brings in a laptop with Windows 7 Pro on it, they can type the name of the server into the "Run" dialog on the Start menu, like "\\SERVERNAME\", and access all of the shares freely... because they are not logged in to the domain, there seem to be no restrictions. I have reviewed the permissions on the folders: they all have "Domain Users", and I have removed "Everyone" from the list of accounts able to see them. Guest access is also disabled. What am I doing wrong? The only group in the list is "Domain Users"; isn't a domain user a user that is logged in to the domain? How do I stop non-domain users from seeing the shared folder?

    I noticed this on Windows Server 2003, too, at another time. I assume both had similar security issues; neither was set up by myself, so I am not sure what could have been enabled or specifically deactivated that makes this issue appear.

    Read the article

  • Changing farm account in Sharepoint 2010

    - by user55709
    After changing the farm account to a domain user account, I get the following error when trying to access the Central Administration page: "Cannot connect to configuration database."

    After I realized the headache might not be worth it, I decided on a reinstall using the following SharePoint user account guidelines: http://technet.microsoft.com/en-us/library/ee662513.aspx

    After getting everything up, I get an error when using the designated farm account under the Central Admin website's Manage Service Accounts: "Access denied". If it is the farm administrator, why would I not be able to manage service accounts? I am able to access the other parts of the admin site. Also, when logging in with the farm account, it lists me as a "system account", not the domain account I used to log in. Am I missing something, or is this normal behavior? Am I not supposed to log in with the farm account? When I log in with the setup account (also a domain account) I can access everything with no errors on the site. The only difference between the two accounts is that the setup account has local admin privileges on the SharePoint farm server. Note that those privileges are not necessary for the farm account, according to the article cited.

    Read the article

  • Methods and practices for managing a network that has no internet connection

    - by FaultyJuggler
    Originally asked on Super User, but I realized this belongs here. Long story short, I am setting up a network with 32 servers of varying specs that will be used for testing and development. We will be using Red Hat Linux. We do not have a router as of yet and are looking into making one of the servers act as our router, DHCP server, etc. The small cluster will be on an isolated network with no internet. I can use external hard drives and discs to transfer anything from external sources onto machines on the network, so this isn't a locked-down secure network; it just won't have a direct connection to the outside world.

    I've worked on such setups before, but always long after they were set up, so I'm reaching out to see how groups have handled the initial setup and maintenance of such a situation. What is the best way to get the machines all configured and up to date? What are the best ways to automate updates, network-wide installs, etc.? Given that I have large multi-terabyte external hard drives that would be used to drop whatever files are needed onto a central server, how do I then distribute those files and install their contents? I've done Perl scripting, and some teammates have played with Puppet, so we aren't completely in the dark; I just wanted to avoid reinventing the wheel, since this is a common challenge.
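
    A hedged sketch of the usual pattern for offline Red Hat-style updates: copy packages from the external drive onto one server, publish them as a yum repository, and point every node at it (the host name and paths are placeholders):

        # on the central server, after copying the RPMs into /srv/repo
        createrepo /srv/repo

        # on each node, /etc/yum.repos.d/local.repo:
        [local]
        name=Local mirror
        baseurl=http://central/repo
        gpgcheck=0

        # then, on each node:
        yum clean all && yum update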

    Read the article

  • Fedora 17 transparent Ethernet Bridge not forwarding IP traffic

    - by mcdoomington
    I am running Fedora 17 with the latest ebtables and have been trying to set up a transparent bridge. Using the following script, I send a ping through the bridged host and only see the requests on the bridge (among other traffic from eth0), BUT ARPs and ARP replies are making it through. My host is set up as:

        Client 192.168.1.10 <-- eth0 -- eth2 --> 192.168.1.20

    The bridge script:

        #!/bin/sh
        brctl addbr br0
        brctl stp br0 on
        brctl addif br0 eth0
        brctl addif br0 eth2
        ifdown eth0 >/dev/null 2>&1
        ifdown eth2 >/dev/null 2>&1
        ifconfig eth0 0.0.0.0 up
        ifconfig eth2 0.0.0.0 up
        echo "1" > /proc/sys/net/ipv4/ip_forward

        ebtables -P INPUT DROP
        ebtables -P FORWARD DROP
        ebtables -P OUTPUT DROP
        ebtables -A FORWARD -p ipv4 -j ACCEPT
        ebtables -A FORWARD -p arp -j ACCEPT

    Any assistance would be great!
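
    A hedged guess at the missing piece: on recent kernels, bridged IPv4 frames are also passed through the iptables FORWARD chain (the bridge-netfilter hook), so Fedora's default firewall can silently drop what ebtables has already accepted. Worth checking:

        # see whether bridged traffic is being handed to iptables
        sysctl net.bridge.bridge-nf-call-iptables

        # either allow it in iptables, or leave ebtables alone in charge:
        sysctl -w net.bridge.bridge-nf-call-iptables=0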

    Read the article

  • USB Hub vs. Docking Station USB vs. Laptop USB

    - by Will
    I recently had thoughts about my current setup in my office, especially about how the USB ports are distributed. Here's my setup:

    - My Lenovo T410 is docked to a Lenovo Docking Station Series 3, which provides 6 USB ports, all of which I use (3 external drives, mouse, keyboard, and the USB hub of my monitor).
    - The USB hub on my external monitor (most probably powered by the monitor's power supply) provides 2 USB ports: one for my webcam and another for USB sticks.
    - On the T410 itself I have 4 USB slots that are usually unused, as I don't want to mess with USB plugs when undocking the laptop; now and then I plug my printer into one of them, just because I don't have any USB ports left.

    Now I'm wondering how fast each of these slots is. I assume that the 6 USB ports on the docking station somehow go through the docking connector on the bottom of the laptop. Does that connector have enough bandwidth for all 6 USB ports to perform as if they were dedicated ports, like the 4 on the laptop? Also, how is the performance of USB hubs in general (like the one in my external monitor)?

    Read the article

  • Changing the name of a binary packaged application and its invoking command

    - by jerkstore
    I have taken the source code of a large project, App A, and made many modifications to it to produce my version, App B. Both App A and App B compile cleanly on Debian and Red Hat, and now I would like to build binary packages for both platforms. The last modification I need to make is ensuring App B can be installed alongside App A without any interference. I should be able to invoke both application-a and application-b in the terminal, and have both listed as separate software in whatever desktop environment is present.

    The projects have a debian/ folder (containing rules, control, etc.) and an rpm/ folder containing a SPEC file. Currently, building and installing the .rpm and .deb packages works, except that App B is recognized as App A and therefore does not meet the aforementioned requirements. ldd shows the programs have exactly the same dependencies, and I am not able to pursue static linking of libraries. What modifications do I need to make to my project to achieve the desired outcome? Please be specific, as I do not have much experience with the packaging process.
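
    A hedged sketch of the places that typically need renaming so the two packages can coexist (the field names are real; application-b is a placeholder for whatever name you choose):

        debian/control:           Source: application-b
                                  Package: application-b
        rpm SPEC:                 Name: application-b
        debian/rules / %install:  install the binary as /usr/bin/application-b
        desktop entry:            ship an application-b.desktop with its own Name=

    The key constraint is that the two packages must not ship any file at the same path. As long as every installed path (binary, .desktop file, icons, man pages) carries the new name, dpkg and rpm will treat them as unrelated packages and install them side by side.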

    Read the article

  • Directory service unavailable, new hardware, same settings

    - by Alex
    I'm working on a project with 2 sites connected by a VPN. Site 1 has the main server, and there is a secondary server at site 2 that I am trying to replace. The current setup works perfectly, but I can't for the life of me get the replacement server at site 2 up and running. I'm replacing like for like, just with upgraded hardware. I have installed the OS (all Server 2003 Standard SP2) using exactly the same settings as the old server, and have set up Active Directory, DNS Server, DHCP Server, and WINS Server, all configured with the same settings as the old server (except IP address and name).

    I can access Active Directory, but I can't do anything: add, edit, and delete all return "the directory service is unavailable". No one can log in on any of the computers at site 2, and the internet is down. Plugging the old server back in and connecting it to the network rectifies the issue (so both new and old are connected at site 2): everyone can log in and the internet is back (curious, since the modem connects directly to the switch, and even with the new server online I can connect to the router via IP, but not to the net).

    I really don't have much experience, but I've been roped into doing this because my company is too cheap to hire a real network admin. Any suggestions on where I can start troubleshooting? It's driving me crazy, and I only have a day before all the users are back on site.
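
    A hedged pointer for troubleshooting: "the directory service is unavailable" on a box configured by hand often means the new server was never actually promoted into the existing domain, so it holds no replica of the directory. On Server 2003 the promotion is done with dcpromo, choosing the existing-domain option so it replicates from site 1:

        dcpromo
        rem in the wizard, choose "Additional domain controller for an existing domain"

    After promotion, dcdiag and repadmin /replsum are the standard checks that replication and the directory service are healthy.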

    Read the article

  • Setting up a virtual ftp directory that points to another computer

    - by AngryHacker
    I have IIS 5 sitting on an old Windows 2000 Professional box. It has an FTP site that allows me to access files; it works great, no problem at all. However, now I need to set up a virtual directory that points to a share on another computer on the network (running Windows XP Tablet Edition). The share requires a user name and password. The network is a simple workgroup (I don't have any domains or any of that). What is the correct procedure for this?

    I've tried setting up a share via UNC and typing in the user ID/password when asked, but when I finished, the virtual directory showed up as an error in the IIS Manager and I couldn't access it. I mapped the share onto a drive and then tried to set up a virtual directory with that drive. Same result. Is there something simple I am missing? Would upgrading any part of the picture help at all?
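
    Two hedged things worth checking. First, IIS FTP virtual directories never appear in directory listings, so the site can look broken even when it works; from an FTP client you have to change into the virtual directory by name:

        ftp> cd sharename
        ftp> ls

    Second, for the credentials themselves, the account used to connect to the UNC path generally needs to exist with the same name and password on both workgroup machines, since there is no domain to vouch for it.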

    Read the article

  • Can I have a single solid state drive and a RAID array on the same machine? [closed]

    - by jaminto
    Hi. To summarize, I'm looking to use a single solid-state drive as my primary drive and two conventional SATA drives in a RAID 1 configuration for data, and I am trying to install 64-bit Windows 7 onto this configuration. Is this possible? Here are the details:

    I built a desktop that has been running 64-bit Vista on two 500 GB drives in a RAID 1 array for a few years. I just purchased an Intel X25-M 80 GB SATA solid-state drive and was planning on using it as my primary drive, keeping the RAID 1 array as my data drive. I added the SSD and, in the RAID setup, configured it as a RAID 0 array of only one disk. Then I tried to do a clean install of Windows 7 64-bit, but got stuck in the "missing driver for CD/DVD drive" black hole of selecting driver files and having Windows tell me that I don't have the appropriate driver for my hardware. The missing hardware is NOT a CD/DVD drive, since I'm installing off my only CD/DVD drive. Plus, at one point I was able to point it at a driver for my RAID controller, and then my hard drives magically showed up as browsable sources for finding drivers for some other unnamed device that setup couldn't recognize.

    After a few hours of trying drivers (a very slow process), I decided to reboot and look at the BIOS settings. I'm using an ASUS M2A-VM motherboard, which has an ATI SB600 RAID controller on board. I switched the "OnBoard SATA Type" setting from "SATA" to "AHCI", thinking that since AHCI is an Intel thing, this would help. Unfortunately, this abandoned my RAID configuration, and my previously mirrored drives now show up as separate drives when I boot into my current Windows installation.

    Am I trying to do the impossible here? Should I just buy a separate SATA/RAID PCI card and plug the SSD into that? Any help would be greatly appreciated.

    Read the article
