Search Results

Search found 19928 results on 798 pages for 'multiple constructors'.

  • Unnamed, hidden partitions on my 500 GB HD, HP Pavilion dm4 Laptop

    - by emotionull
    I have multiple doubts here. It's a Seagate 500GB 7200RPM HD. I installed it a few months back after my original laptop HD stopped working. The current drives on my laptop, as shown by Windows Disk Management, are in the attached image. After installing the new HD, I did a complete clean install of Windows 7 and I didn't create any partition myself, manually. So there are 4 drives. Even previously, before I installed this new HD, my laptop had 4 partitions, but there were no unnamed partitions like the two in this case. The other two were HP Tools and Recovery or something; it was a pre-configured, factory-installed Windows. Also, now when I right-click on the unnamed drives in Disk Management, all the options are greyed out (see image) except Delete Partition. So how do I know what's inside those partitions? Will it be OK if I delete them? I want to install Ubuntu and dual boot it with my current Windows installation. I cannot do it in the current setup, as there are already 4 partitions on my HD, and if I try to make a new partition it will be a logical one (correct me if I am wrong here). So can I delete the unnamed, hidden partitions and use them for Ubuntu? A bit unrelated question: as a backup option, can I use Windows 7's Backup and Restore facility to keep a complete backup of all the drivers and system software?
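
    One way to see what those unnamed partitions actually are before deleting anything is diskpart's detail view; a minimal sketch, assuming the HD is disk 0 and partition 2 is one of the unnamed ones (check with "list disk" and "list partition" first):

      diskpart
      DISKPART> list disk
      DISKPART> select disk 0
      DISKPART> list partition
      DISKPART> select partition 2
      DISKPART> detail partition

    The type ID and label that "detail partition" prints usually identify what created the partition (a clean Windows 7 install normally adds a small System Reserved partition, for example), which is safer than guessing from Disk Management alone.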

  • Graphics artifacts/distortion with Win7 and nVidia

    - by Gepard
    The problem I encounter is rather hard to describe, so I provide a screenshot: http://dl.dropbox.com/u/1732760/video-distortion.png As you can see, there are horizontal stripes in random colors. These stripes sometimes appear in all windowed apps, in games and on the desktop too. They tend to stay in place until I refresh the window (or force it to redraw by, for example, minimizing and maximizing again). They also tend to appear in the same place and shape multiple times; even if they disappear, it's very likely they will be in the same place again after a while. These artifacts do not blink or change if the computer is idling. If I don't touch anything and do not use the mouse, they will stay in place forever (unless some app redraws its window on its own). I first encountered this problem some weeks ago. Back then I thought it might be a cooling problem, so I took out the graphics card, removed dust from the radiator and fan, and put it back into the PC. I also ran a stress test using FurMark (peak temperature was ~65C) to see if the problem becomes more intense as the card gets hotter, but surprisingly no artifacts whatsoever appear during the stress test. The graphics card is a Galaxy GeForce 7300 GT with DDR3 memory, never overclocked. Drivers are the latest, from the Nvidia site. OS is Windows 7 64-bit, updated. AMD64 3000+, 2GB RAM. I'm running a dual monitor setup with two 19'' Samsung LCDs and the problem is on both, so I assume it's not a monitor or cable issue.

  • Remote Desktop leaves host unresponsive

    - by Jeff Dalley
    I have my desktop PC at home set up to accept remote connections, and I often connect to it from work on my laptop via mstsc.exe. However, every time I remote to it, I find when I go home that despite the monitor being on, it's not receiving an image and it looks as though the computer is hibernating/asleep. I basically have to restart it whenever I get home, and I know there's an answer for why it's doing this. More details: when exiting the remote session, I have tried both logging off the account and closing the RDP window without logging off; both give the same result. When I get home to the desktop I of course try moving the mouse, Ctrl+Alt+Del to see if it's responsive enough to restart, and multiple key presses to see if I can get any audio out of it. It seems pretty obvious it's sleeping/hibernating in some way: nothing happens in any of these cases and a physical restart is necessary. Both desktop and laptop are running Windows 7 Ultimate. I'm thinking it really is sleeping/hibernating, and I'm not sure why, because left alone my desktop's power options are set to never turn off the HDD or change its state - I leave it on 24/7. This could be a stupid error on my part but I just can't see it! Thanks.
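
    A hedged first check, assuming the power plan is the culprit despite its settings: from an elevated prompt on the desktop, turn the sleep and hibernate timers off outright and see whether the machine stays reachable after the next RDP session (standard Windows 7 powercfg syntax; 0 means never):

      powercfg /change standby-timeout-ac 0
      powercfg /change hibernate-timeout-ac 0
      powercfg /h off

    The last line disables hibernation entirely, which also rules out hybrid sleep as the cause.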

  • Many clients on a wireless AP for UDP broadcast packets

    - by distorteddisco
    I asked this question on StackOverflow and was directed over here, so I'd appreciate any advice. I'm deploying a smartphone application as part of a live music performance that depends on receiving UDP broadcast packets from a wireless access point. I'm guessing that between 20 and 50 clients will be connected at any one time. I'm aware that a maximum of 20 clients per access point is advised, but as the UDP broadcast packets are confined to the LAN, how would I be able to link multiple APs together? I'm looking for recommendations on a suitable AP for this. The actual data transmission rates are very low - only a few kB/s - as I'm just sending small messages to the smartphone apps, and there will be no WAN internet connection. I tried it with a few connected peers on an ad-hoc wireless connection without any problems, but ran into dropped-packet issues on an old WRT54G running DD-WRT, though it's in pretty rough shape. What's the best way to do this? I suppose I could limit concurrent wireless connections to 20 clients... but more would be nice. EDIT: I should also say that it's purely one-way communication; the smartphone application is only receiving broadcast packets, not sending anything.

  • How to set up a server without a hosting control panel

    - by A4J
    I have always used a control panel on my dedicated servers - from cPanel to Plesk to Virtualmin - and I am now considering ditching a CP altogether and manually editing config files. My requirements are fairly simple: I will host multiple sites on the server, some Apache with PHP & MySQL and some Passenger with Rails & Postgres. All will require email SMTP/POP. FTP/stats will not be required. Could someone please give me a quick run-down of what I would need to do in terms of installing software and configuration? My server will come with a base install of CentOS 6.4 minimal. My thoughts so far:
    - Install/update the latest versions of MySQL & Postgres (are they 'safe' out of the box, or do I need to do anything else like set up root passwords etc? See the sketch after this list.)
    - Install Apache & PHP (again, are the base installs good to go or do they require security tweaks?)
    - Set up nameservers/hostnames/reverse DNS etc (any guides on how to do this please?)
    - Install Rubygems
    - Install and configure Dovecot and Postfix (any tips on doing this, or links to how-tos that cover it?)
    - Set up each website - any links to guides on how to do this?
    - Install/configure the firewall (or is the default install good to go?)
    Any other tips or advice would be greatly appreciated, as would links to guides or how-tos.
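
    For the install step specifically, a minimal starting point on CentOS 6 looks something like this sketch (stock package names; not a hardened setup, and DNS, vhosts and mail configuration still have to be done by hand afterwards):

      yum update
      yum install httpd php php-mysql mysql-server postgresql-server postfix dovecot
      service mysqld start
      # walks through setting the MySQL root password and dropping the test DB
      mysql_secure_installation
      chkconfig httpd on
      chkconfig mysqld on

    On the "safe out of the box" question: MySQL ships with an empty root password until mysql_secure_installation (or SET PASSWORD) is run, and Postgres on CentOS 6 needs "service postgresql initdb" plus a pg_hba.conf review before it accepts logins.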

  • Security Token for Mac/Linux/Windows, self-managed, pref. open source?

    - by DevelopersDevelopersDevelopers
    I'm looking to buy an evaluation security token (combined smart card/USB reader) for my business that works on:
    - Windows 7 x64
    - OS X 10.6.x x64
    - Ubuntu Linux (64 or 32 bit, 10.04 or 10.10; I can bend based on possible tokens)
    The functionality I need is:
    - Login authentication
    - Authentication for whole-disk encryption (in Linux/Windows; Mac is flexible here)
    - Signing/encryption using PGP and x.509 certificates
    - RSA-2048-capable keys (1024 is not good enough)
    - I can manage the certificates myself
    - Open-source middleware/drivers (not necessarily FOSS, just source available; I can flex on this, I just want to be able to audit the code. OpenSC-compatible on Linux would be great.)
    Is there any token that can do all of this? Or would I need multiple ones to accomplish this? Or do I need to look at smart cards and readers to get this? I have been researching this for a while and have had a heck of a time even getting accurate information about products. Also, I am in the USA, and it appears that EU export laws prevent me from buying from there, so those vendors are out. I was looking at Feitian tokens from Gooze, but since they are in France I can't buy.

  • What is the risk of introducing non standard image machines to a corporate environment

    - by Troy Hunt
    I’m after some feedback from those in the managed desktop or network security space on the risks of introducing machines that are not built on a standard desktop image into a large corporate environment. This particular context relates to the standard corporate image (32 bit Win XP) in a large multi-national not being suitable for a particular segment of users. In short, I’m looking at what hurdles we might come across by proposing the introduction of machines which are built and maintained by a handful of software developers and not based on the corporate desktop image (proposing 64 bit Win 7). I suspect the barriers are primarily around virus definition updates, the rollout of service packs and patches, and the compatibility of existing applications with the newer OS. In terms of viruses and software updates, if machines were using common virus protection software with automated updates and using Windows Update for service packs and patches, is there still a viable risk to the corporate environment? For that matter, are large corporate environments normally vulnerable to the introduction of a machine not based on a standard image? I’m trying to get my head around how real the risks of infection and other adverse events are from machines being plugged into the network. There are multiple scenarios outside of just the example above where this might happen (e.g. a vendor plugging in a machine for internet access during a presentation). Would a large corporate network normally be sufficiently hardened against such innocuous activity? I appreciate the theory as to why policies such as standard desktop images exist; I’m just interested in the actual, practical risk and how much a network should be protected by means other than what is managed on individual PCs.

  • Moving from 1 Linux Partition to Many over USB Mount

    - by Mistiry
    We have devices which use Compact Flash for storage. They work OK, but we recently got industrial-grade CF cards to start using. One of the major problems we get is corruption on the flash card. As it is now, these flash cards run Debian with everything in a single partition. We want to have multiple partitions on the new industrial CF cards to help avoid some of the corruption problems. I booted up the device and attached a USB CF reader. I then used fdisk to partition the CF card in the USB reader. How can I move the data to these partitions so that it works? I have a partition for each of these directories: /lib /var /root /boot /tmp /home /etc, plus / and swap space. I imagine I can't just use rsync - do I need to attach a second CF reader with a copy of the CF card, so that it's not active and in use, and then copy from the first reader to the second? How will the system know where to find its files? I know I'd have to change fstab, but that resides in /etc, which will be on a separate partition... how will it find the fstab file if it can't find /etc? And what about GRUB? I'm at a loss; perhaps it's just because I'm under the weather, or I'm just missing a piece of logic here... Any help is greatly appreciated. This is somewhat urgent, as our existing stock is nearing its end and we don't want to purchase anything but these industrial cards, but we need to get it working with partitions.
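
    For what it's worth, the usual pattern for this kind of migration is exactly the two-reader idea: copy while neither card is the running system, mounting the new card's partitions in a staging tree and rsyncing into it. A sketch, with device names as assumptions:

      mkdir -p /mnt/new
      mount /dev/sdc1 /mnt/new              # new root partition
      mkdir -p /mnt/new/home /mnt/new/var   # one mount point per new partition
      mount /dev/sdc6 /mnt/new/home
      mount /dev/sdc7 /mnt/new/var
      rsync -aAXH /mnt/old/ /mnt/new/       # old card mounted read-only at /mnt/old
      # then edit /mnt/new/etc/fstab for the new layout and reinstall grub
      # onto the new card before booting from it

    On the /etc question: the kernel only needs the root partition to find fstab, so /etc (and /lib, which is needed before any mounts happen) normally stays on /. Splitting those two off is the part of the plan worth reconsidering; /var, /home and /tmp are the classic candidates for separate partitions.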

  • Enterprise Redirection Services?

    - by Aaron Alton
    This is probably a case of "if I knew what it was called, I could google it in 5 minutes" - but I don't know what it's called. It's probably best to explain the requirement using an example. We have a number of services (VPN, OWA, etc.) which we host from one of our datacenters. We have a number of datacenters, and we technically have the infrastructure already in place to support these services at several of them. To provide access to these "services", I would create an external DNS entry (e.g. VPN.MyCompany.com pointing to the gateway IP for one of my DCs), and clients will connect to it via the DNS entry. Since I have multiple datacenters that can support this service, I could theoretically offer a "highly available, geographically dispersed" solution if I could point this DNS entry to some sort of third party who offers highly available "redirection" services. If my primary site goes down, I could just make a change via some management console and configure the redirector to point to a different DC. Of course, it would be fairly straightforward to set this sort of thing up on one of our servers, but that would kinda defeat the purpose of a highly available third party. Is anyone familiar with a service like this? I'm thinking something like DynDNS, but with enterprise availability guarantees.

  • Best solution to keep data secure

    - by mrwooster
    What is the simplest and most elegant way of storing a small amount of data in a reasonably secure way? I am not looking for ridiculous levels of advanced encryption (AES-256 is more than enough) and I am only looking to encrypt a small number of files. The files I wish to encrypt are mostly comprised of password lists and SSH keys for servers. Unfortunately it is impossible to keep track of the ever-changing passwords for my servers (and SSH keys), so I need to keep a list of the passwords. Obviously this list needs to be secure, and also portable (I work from multiple locations). At the moment, I use a 10MB encrypted disk image on my Mac (standard .dmg, AES-256) and just mount it whenever I need access to the data. To my knowledge this is very secure and I am very happy using it. However, the data is not very portable. I would like to be able to access my data from other machines (especially ones running Linux), and I am aware that there are quite a few issues trying to mount an encrypted .dmg on Linux. An alternative I have considered is to create a tar archive containing the files and use gpg --symmetric to encrypt it, but this is not a very elegant solution as it requires gpg to be installed on every system. So, what other solutions exist, and which ones would you consider to be the most elegant? Thanks
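
    For comparison, the tar-plus-gpg route mentioned above is a one-liner in each direction, and gpg is preinstalled or one package away on most Linux systems and easy to add on a Mac (keys/ here is a placeholder directory name):

      tar czf - keys/ | gpg --symmetric --cipher-algo AES256 -o keys.tar.gz.gpg
      gpg -d keys.tar.gz.gpg | tar xzf -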

  • Laptop keyboard stopped functioning properly

    - by galdikas
    Basically, today out of the blue my laptop keyboard started acting up. For example, I press "s" but I see "`s" on the screen. Some keys don't work at all. And the weird thing is it keeps changing, in the sense that, for example, in the morning I press "s" and I get "`s", then a few hours later "s" works correctly... but pressing some other key outputs multiple characters. Then sometimes it will spontaneously start outputting some character into any input field that is available, as soon as I click on it to gain focus. And it will just keep on going and going, as if I were holding a button down (which I'm not). At first I thought this was a virus, but I have a dual boot of Windows 7 and Ubuntu, and I get the same problem on both. I even tried to boot a live CD, and still had the same problem. Has anyone had this kind of problem? It's a Toshiba Satellite C660-258. It is around a year old, but quite well kept. I never spilled anything on it or dropped it. And my wireless USB keyboard works perfectly on it (apart from the spontaneous character inputs, which I can stop by hitting the Num Lock key).

  • Wipe free space on LVM-LUKS (dm-crypt) Volume

    - by peter4887
    My three system partitions are created with LVM on a LUKS partition (dm-crypt). These are /home, / and swap. The filesystem is ext4. They are encrypted because they are on my laptop and I don't want laptop thieves to get my data. But I often share my laptop with other people, so they can access my encrypted partitions. I don't want these people to be able to recover my cache and all the data I deleted. So I'm now trying to wipe all the free space on /home to prevent recovery with tools like photorec. (One overwrite should do; the need for multiple overwrites is just a rumor.) But still I haven't found any solution to wipe this free space successfully. I tried dd if=/dev/zero of=/home/fillitup bs=512 count=[count of free sectors] so my partition was completely full of data. df /dev/mapper/home said 100% is used and there are 0 sectors available. But I could still recover gigs of data with photorec, although I selected to recover just from the free space. photorec displays: /dev/mapper/home - 340 GB / 317 GiB (RO), but df displays that the size of /home is just 313G. Why are there these differences, and what does the 340 GB mean? It looks like there is a place on my /dev/mapper/home partition that I can't access to overwrite, but I can access it to recover. I also checked for corrupted sectors, but there aren't any. Maybe this is the space between my existing files? Does anyone know why I can't wipe my free space with dd, and how I can find the location of the loads of recoverable files, to securely delete them?
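
    On the dd question: the usual fill idiom needs no sector count at all; let it run until the filesystem is full, flush, then delete (a sketch):

      dd if=/dev/zero of=/home/fillitup bs=1M   # stops on "No space left on device"
      sync
      rm /home/fillitup

    As for the numbers: 340 GB and 317 GiB are the same size in different units (photorec prints both), and ext4 reserves about 5% of blocks for root by default (visible with tune2fs -l), which a non-root fill can never write to - one plausible home for recoverable leftovers.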

  • Developing high-performance and scalable zend framework website

    - by Daniel
    We are going to develop an ads website like http://www.gumtree.com/ (it will not be like this one, but just to give you an idea) and we are having some issues regarding performance and scalability. We are planning on using Zend Framework for this project, but that is all I'm sure of at this point. I don't think a classic approach like Zend Framework (PHP) + MySQL + Memcache + jQuery (and I would throw Doctrine 2 in there too) will result in a high-performance application. I was thinking of making this a RESTful application (with Zend Framework) + NGINX (or maybe MongoDB) + Memcache (or eAccelerator - I understand this will create problems with scalability on multiple servers) + jQuery, a CDN for static content, a server for images, and a scalable server for the requests and the rest. My questions are:
    - What do you think about my approach?
    - What solutions would you recommend in terms of server approach (MySQL, NGINX, MongoDB or pgsql) for a scalable application expected to have a lot of traffic using PHP? I would be interested in your approach.
    Note: I'm a Zend Framework developer and don't have too much experience with the servers part (to determine what would be the best solution for my scalable application).

  • What needs to be considered when setting up for Linux Development? [closed]

    - by user123586
    I want to set up a box for Linux development. I have a working Linux install with the usual toolchain and an IDE. I'm looking for advice on how to approach structuring accounts and folders for development. As the Perl folks say, "There's always more than one way to do it." Left to my own devices, I'll come up with several unproductive ways of doing it before figuring out what an experienced Linux programmer would think obvious. I'm not looking for instructions to follow for a specific set of tools or a specific software package. Instead, I'm looking for insight into what decisions need to be made and how to make them, with an understanding of the advantages and disadvantages of each individual choice. These are some of the questions that come up:
    - Where to put sources
    - Where to put built object files and libraries
    - Where to install
    - What to set in environment variables
    - What compiler flags matter and how to manage them across several types of builds
    - What configuration entries to make in an IDE
    - How to manage libraries to support multiple environments
    - How to handle different build versions, such as debug vs release, or cross-platform builds (see the sketch below)
    If you are an experienced Linux developer, the answers to these questions may seem trivial and obvious. I'd like to learn to make decisions about these questions that result in as little manual configuration as possible, given some existing sources, a particular IDE or no IDE at all, a particular set of development libraries, etc. At this point you're probably thinking: can you be more specific? Sure. But remember that I'm trying to learn how to think about this stuff, not just follow a recipe for a specific set of results. Example: set up a project that uses CMake for some of its components, autogen.sh followed by configure for others, and just configure for a few more, with:
    - debug builds without an IDE
    - debug builds in NetBeans
    - debug builds in Eclipse
    - debug builds in Visual Studio
    - all of the above with release builds for Linux, Mac and Windows
    What are your thoughts on an approach that works for all four? Do you have any advice on what to read?
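
    On the build-variants part of the example, one low-configuration convention that covers the CMake components is a separate out-of-source build tree per variant, as in this sketch:

      mkdir build-debug && cd build-debug
      cmake -DCMAKE_BUILD_TYPE=Debug ..
      make
      cd .. && mkdir build-release && cd build-release
      cmake -DCMAKE_BUILD_TYPE=Release ..
      make

    Each IDE (NetBeans, Eclipse) can then be pointed at its own build directory and the source tree stays clean; the autogen.sh/configure components get the same treatment via VPATH builds (run ../configure from an empty build directory, with a per-variant --prefix).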

  • Are my web server permissions for uploading correct?

    - by user1699176
    I'm on Debian and I have my website in the directory /srv/www/mysite.com/public_html. I set chown www-data:www-data on /srv/www. I have root disabled and created a sudo user which is id 1000:1000. I would also like to use this user to upload to /srv/www, so I added my sudo user to the www-data group. I originally got a message saying that I didn't have permission to upload a file to that directory. After playing around with multiple permissions for a while I was finally able to upload properly, but I'm not sure if this setup is correct. I'm hesitant to change it for now since it actually works, so I thought I'd ask for advice. I think what I ended up doing was this:
      sudo chown -R www-data:www-data /srv/www
      sudo chmod g+s /srv/www
      sudo usermod -aG www-data myuser
      sudo chgrp -R www-data /srv/www
      sudo chmod -R g+w /srv/www
    When I was finally able to successfully upload a file (with FileZilla) it showed the owner as myuser myuser. Shouldn't it have been www-data myuser? My question is whether this is correct and if there are any potential security issues. For example, I wasn't sure if I was actually supposed to use "myuser" to own the /srv/www directory instead:
      sudo chown -R myuser:myuser /srv/www
    or maybe
      sudo chown -R www-data:myuser /srv/www
    If you need more info, let me know, thanks.
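
    For comparison, a common arrangement is the reverse of the first chown above: keep the sudo user as owner and www-data as the group, so uploads need no privilege games and the web server gets read-only access. A sketch, reusing the question's names:

      sudo chown -R myuser:www-data /srv/www
      sudo chmod -R g+rX /srv/www                        # group may read files and traverse dirs
      sudo find /srv/www -type d -exec chmod g+s {} \;   # new files inherit the www-data group

    The myuser myuser ownership FileZilla showed is expected with the original commands: the setgid bit was set on /srv/www only, not on the subdirectory the file actually landed in, so nothing forced the www-data group onto the new file.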

  • Good Hosting Providers With Zend Framework Support [closed]

    - by manyxcxi
    I currently use ixwebhosting for my hosting services. They're cheap and work (most of the time). The databases are horribly slow, the servers are horribly slow, and their support (though usually prompt) is tough to deal with. That being said, they're cheap, I've got like 20 domains hosted in my account, none of them are high volume, and they work JUST well enough - until today. This isn't meant to be a condemnation of ixwh though. Their prices are very low for what they do offer, and most things work just fine, most of the time. I need to be able to host web apps written with Zend Framework in a fairly easy fashion. The server performance can't be worse than what I've already had (a pretty low hurdle to clear), and I don't want to spend $30/mo. These are not money-making websites - they're projects. My requirements are PHP 5.3, ZF support, MySQL databases, multiple domains - not much. Who should I look at, and who should I look out for?

  • Wifi antenna extension with F-connector/RG-6(RG-59) cable?

    - by rjz2000
    In an older house, the wire mesh in the walls surrounding the furnace behaves like a Faraday cage and blocks wifi signals. It is also difficult to lay new cable, but there is television cable to multiple locations due to there once having been a roof-mounted television antenna. It would be relatively trivial to install the wifi router at the central distribution point, then have the antenna broadcasting/receiving the signal plugged in at each of the old television outlets. I assume that it would not be too difficult to find an adapter for SMA <-> F-type connectors. The cable is actually RG-59 rather than RG-6, but I assume it still has relatively good RF isolation along its length, which is no more than a couple hundred feet in any direction. Does anyone know of a problem with the idea? Will a router get confused if there is too little interference between the two antennas? Is that length of cable (~100ft) too long for the signal a router broadcasts? I have seen that it is also possible to use old ~$30/each FiOS cable modems available on eBay to extend a network over television cable. However, that seems like a less elegant solution, and it might interfere with UPnP and DLNA services I'd like to have work on a single network. Thanks if anyone has answers or suggestions before I try this project!

  • Apache: Setting up local test server with subdomains

    - by RC
    Hi everyone, I have XAMPP running on my desktop machine, and I do all my work on it with no issues.
      http://localhost       ---> points to public_html
      http://site1.localhost ---> points to site 1
      http://site2.localhost ---> points to site 2
      http://site3.localhost ---> points to site 3
    Entering the above URLs in my web browser on the machine running Apache works great, and I can work on multiple sites within distinct subdomains. But what I want to do now is to transfer Apache and all the files to another Windows 7 machine within the LAN, but still be able to view the subdomains from my main development machine. With a vanilla XAMPP installation on the new hosting machine, entering the IP address of that machine (e.g. 192.168.1.10) into my development computer's browser sends me to the main public_html folder. But how do I set up subdomains such that I can access them externally? For example, http://site1.devmachine Thanks for any help.
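
    A hedged sketch of the usual two-part fix (the devmachine names are made up): give the names an address in the development machine's hosts file, and let Apache on the hosting machine pick the site by ServerName:

      # C:\Windows\System32\drivers\etc\hosts  (on the development machine)
      192.168.1.10  devmachine site1.devmachine site2.devmachine site3.devmachine

      # httpd-vhosts.conf (on the new hosting machine, Apache 2.2 syntax)
      NameVirtualHost *:80
      <VirtualHost *:80>
          ServerName site1.devmachine
          DocumentRoot "C:/xampp/htdocs/site1"
      </VirtualHost>

    The site1.localhost names only ever resolved on the original machine, which is why they have to be re-created in the hosts file (or in LAN DNS) on every machine that should reach them.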

  • Transfer iptables rules to another server (almost) real time

    - by MrShunz
    I'm running 2 cPanel servers with the ConfigServer Security & Firewall plugin. One of the functions of the plugin is to block via iptables (temporarily and/or permanently) IPs which fail various authentications (POP3/IMAP, SMTP, FTP, webmail, mod_security and such). Now, I'd like to push those IP blocks to the border router to drop packets as soon as possible (and in doing so protect the other machines on the network). Keep in mind that after N failed logins an IP is blocked for 5 minutes, then re-allowed. If multiple bans occur in an hour, the IP is blocked permanently and has to be unblocked "by hand". So I need a near-realtime solution. What I'm looking for is a better way than firing some cronjobs on both the cPanel servers and the border router to:
    - dump the rules to a file
    - transfer the file to the border router (via scp/sftp)
    - load the rules from the file on the border router
    I'm aware that I will need some scripts to parse and modify the rules, as the cPanel servers have one ethernet interface and some aliases while the border router has two ethernet interfaces and some loopbacks. All machines involved use Linux. EDIT (as per @pjmorse's comment): the plugin consists of a bunch of perl and config files. The part I'm interested in is a process which scans logfiles (lfd) and installs iptables rules (and sends an alert email). The thing is, it upgrades quite often (once or twice a week) and is itself 7000 lines of perl, so I'm not comfortable tampering with it.
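
    As a baseline before anything fancier, the three cron steps can collapse into one small pipeline per server, sketched below. Assumptions: DENYIN is the chain csf uses for inbound blocks, EDGEBLOCK is a pre-created chain hooked into the router's FORWARD, and the router accepts key-based SSH/scp.

      # on each cPanel server, e.g. every minute from cron
      iptables -S DENYIN \
          | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}(/[0-9]+)?' \
          | sort -u > /tmp/blocked
      scp /tmp/blocked root@border:/tmp/blocked.$(hostname)

      # on the border router: rebuild the edge chain from the collected files
      iptables -F EDGEBLOCK
      cat /tmp/blocked.* | sort -u | while read ip; do
          iptables -A EDGEBLOCK -s "$ip" -j DROP
      done

    An ipset on the router would avoid the flush-and-rebuild churn at the cost of one more tool, and since this only reads csf's chains, plugin upgrades don't break it.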

  • Looking for a PowerShell script that can pull a file from a set of PCs and FTP it

    - by DangeRuss
    I'm looking to write a script (preferably PowerShell) that will essentially copy a file from a bunch of PCs and FTP it to a server. So the structure of the environment is that we have a file on multiple PCs (around 50 or so) that needs to be placed on a server. Sometimes one of the PCs may be turned off, so the script would first need to ensure the PC is up and running (maybe a ping result), then it would need to go into a directory on that PC, pull a file off of it, rename the file, place it into a source directory, then remove the file. Naming convention doesn't matter, but a date/time stamp would be easiest. Ideally, it would be best to first move all the files to a source directory to save on FTP bandwidth, but since the files will be named the same, the files must be renamed during the move process. Move, not copy, because the directory needs to be empty so the file can be re-created the next day. So once moved to the source directory, all the files need to be FTP'd to a server for processing. After all of this, we need to know which PCs on the list did not respond so we can manually retrieve the file, so the script should output a file (txt is fine) that will show which PCs were offline. Everything is on one domain and the script will be run from a server with admin creds. Thank you!
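
    A rough PowerShell skeleton of exactly that flow; every path, the PC list and the FTP details are placeholders to adapt:

      $computers = Get-Content "C:\scripts\pcs.txt"
      $stamp     = Get-Date -Format "yyyyMMdd-HHmmss"
      $staging   = "C:\staging"
      $offline   = @()

      foreach ($pc in $computers) {
          if (Test-Connection -ComputerName $pc -Count 1 -Quiet) {
              # move (not copy) so the remote directory ends up empty
              Move-Item "\\$pc\c$\data\report.txt" "$staging\$pc-$stamp.txt"
          } else {
              $offline += $pc    # collect non-responders for the report
          }
      }
      $offline | Out-File "C:\scripts\offline-$stamp.txt"

      # push the whole staging folder to the FTP server
      $ftp = New-Object System.Net.WebClient
      $ftp.Credentials = New-Object System.Net.NetworkCredential("user", "pass")
      Get-ChildItem $staging | ForEach-Object {
          $ftp.UploadFile("ftp://ftpserver/inbound/$($_.Name)", $_.FullName)
      }

    Renaming to PCNAME-timestamp during the move both satisfies the "files are named the same" constraint and records which machine each file came from.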

  • No access to Windows 2003 admin shares

    - by ARomo
    This is the environment: several Win 2003 SP2 servers and several Win XP SP2 & SP3 clients, all in the same LAN. The firewall is disabled everywhere. No recent Windows updates or configuration changes. This is the problem: since last Thursday, when I log on to any other server or workstation as any regular (non-admin) user, I fail to be able to open ADMIN SHARES ONLY (namely \\server1\c$, \\server1\e$ and \\server1\admin$). The error message is: "\\server1\c$ is not accessible. You might not have permission to use this network resource. Contact the administrator of this server to find out if you have access permissions. Multiple connections to a server or shared resource by the same user, using more than one user name, are not allowed. Disconnect all previous connections to the server or shared resource and try again." I can, however, open the same shares if I use the FQDN or IP address:
      \\server1.domain.local\c$
      \\172.0.0.1\c$
    Other shares do not have this issue and I can open them without any problem. Any ideas or suggestions would be truly appreciated. Thank you in advance.
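
    Given that exact error text, a hedged first thing to try from an affected client is clearing the cached sessions and reconnecting with one explicit identity (plain net use syntax; note the second command drops all current mappings and asks for confirmation):

      net use
      net use * /delete
      net use \\server1\c$ /user:DOMAIN\someuser

    The FQDN and IP forms working is consistent with this: Windows tracks SMB sessions per server name, so \\server1.domain.local counts as a different server and sidesteps the one-user-name-per-server rule.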

  • Server redirect

    - by Tyy
    I have an ASP.NET app XYZ which requires SSL. I am supposed to work with an IIS that has only one Web Site - DefaultWebSite - containing multiple apps and virtual directories (3rd party). So my app is located at domain.com/XYZ. I have to meet these conditions:
    1.) Requesting DefaultWebSite (domain.com) will run my app. It could redirect to ../XYZ, but I would rather it didn't. If it has to be done this way, requesting DefaultWebSite or my app over either HTTP or HTTPS will always end up redirecting to https://domain.com/XYZ.
    2.) I can't touch any other apps or virtual directories, can't create an additional Web Site, can't set DefaultWebSite to require SSL, ...
    3.) (EDIT) Transmitted data (GET or POST) must be preserved.
    I tried to:
    - set the Web Site root directory to my app, but this caused other apps to crash because of my Web.config (not sure why);
    - set up an HTTP redirect on DefaultWebSite to https://domain.com/XYZ. This seems to work correctly, but it doesn't work if the user requests my app directly (redirected to domain.com/XYZ/XYZ, or a redirect loop; see the sketch below);
    - set up a Default Document, but this seems to work only if it is located in the Web Site root directory.
    I know I could write a simple .aspx with Response.Redirect, but... is there any better solution? Am I missing something?
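
    One hedged refinement of the second attempt, using only config that lives at the DefaultWebSite root and inside XYZ: IIS 7's httpRedirect with the Temporary (307) status keeps the method and POST body, and turning the inherited redirect back off inside the app should break the /XYZ/XYZ loop:

      <!-- web.config at the DefaultWebSite root (inside <configuration>) -->
      <system.webServer>
        <httpRedirect enabled="true"
                      destination="https://domain.com/XYZ"
                      httpResponseStatus="Temporary" />
      </system.webServer>

      <!-- web.config inside /XYZ (inside <configuration>) -->
      <system.webServer>
        <httpRedirect enabled="false" />
      </system.webServer>

    Whether the surrounding 3rd-party apps tolerate a root-level web.config is the same risk the first attempt already ran into, so this would need testing.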

  • iptables (DNAT)

    - by user1126425
    I have a host that acts as a gateway for other hosts. The configuration is such that eth0 (192.168.1.3) is connected to the internet via a router, and eth1 (172.16.2.50) is connected to the internal network via a switch. Given that, this host is also running a service that is bound to eth1 and serves the internal network. I want to extend this service to the outside world as well, and was trying to manipulate iptables so that any request that comes to this host via eth0 directed to 192.168.1.3:80 is sent to 172.16.2.50, so internet users can also make use of the service. Here are my iptables rules for setting up the host as a gateway (and these work fine):
      sudo iptables -t nat -A POSTROUTING -s 172.16.2.0/16 -o eth0 -j MASQUERADE
      sudo iptables -t nat -A POSTROUTING -s 192.168.1.0/24 -o eth1 -j MASQUERADE
      sudo iptables -A FORWARD -s 172.16.2.0/16 -o eth0 -j ACCEPT
      sudo iptables -A FORWARD -d 172.16.2.0/16 -m state --state ESTABLISHED,RELATED -i eth0 -j ACCEPT
    And these are the rules that I am trying to add to the iptables to achieve my ends:
      sudo iptables -A INPUT -d 192.168.1.3 -p tcp -dport 80 -i eth0 -j ACCEPT
      sudo iptables -t nat -A PREROUTING -d 192.168.1.3 -p tcp -dport 80 -j DNAT --to-destination 172.16.2.50:80
      sudo iptables -t nat -A PREROUTING -s 172.16.2.50 -p tcp -sport 80 -j SNAT --to-source 192.168.1.3:80
      sudo iptables -A FORWARD -d 192.168.1.3 -p tcp -dport 80 -m state --state ESTABLISHED,RELATED -j ACCEPT
    When I do so, I get an error like "multiple -d flags not allowed"... Can someone tell me how to resolve this error, and will the entries that I want to add serve my purpose? Thanks!
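
    That error almost certainly comes from the single-dash options: iptables parses -dport as -d plus the value "port", so a rule that already has -d appears to contain two -d flags. The long forms are --dport and --sport. Beyond that, SNAT is only valid in POSTROUTING, and since the service runs on this same host (bound to eth1's address), the DNATed traffic is delivered locally rather than forwarded. A hedged corrected pair:

      sudo iptables -t nat -A PREROUTING -i eth0 -d 192.168.1.3 -p tcp --dport 80 \
          -j DNAT --to-destination 172.16.2.50:80
      # after DNAT the packet is addressed to a local IP, so accept it in INPUT,
      # not FORWARD:
      sudo iptables -A INPUT -i eth0 -d 172.16.2.50 -p tcp --dport 80 -j ACCEPT

    No explicit SNAT-back rule should be needed: conntrack remembers the translation and rewrites the replies on the way out.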

  • How and where do you manage your domain names?

    - by Saif Bechan
    In the past several years doing web development I have often needed to buy new domain names. I also changed registrars a lot, so over the years I have accumulated multiple domain names scattered over different registrars all over the world. Now I want to bring a little structure into my business, and I am at the point where I want easy control over my domain names in a convenient way. Does anyone have an idea of the best way to bring structure to this? I have made some suggestions; maybe you can comment on them for me.
    1) Just leave it as it is. To make adjustments I have to log into different panels, and for some registrars I have to email the changes.
    2) Transfer all the domains to one registrar. This will cost a lot, about 10 USD per domain name, but if I can find a registrar where I have full control over DNS, this is worth looking at.
    Can you give me some comments on how you are doing things now? Maybe also which registrar you prefer for doing things.

  • Driver corruption when deploying Dell Touchpad Drivers (with software) during imaging process

    - by BigHomie
    We're an SCCM shop, and we use it to deploy Windows. When deploying Dell laptops (multiple models), the touchpad drivers seem to install properly, but the software doesn't. The resulting problem is that when the touchpad is pressed, the mouse pointer will occasionally "jump" to certain points on the screen. A possible symptom/visible sign of this problem is that the touchpad icon isn't in the system tray. The software is in the Control Panel, but when opened, part of the GUI is pixelated, maybe indicating a botched install? The manual resolution is to go into Device Manager and uninstall the driver with the option to uninstall all driver software. After a restart, the driver and software are apparently reinstalled, and from there it works as expected. Obviously this partially defeats the purpose of a zero-touch deployment. If anyone knows why this happens and/or a possible workaround, those answers would be valid as well. Barring that, I want to find a way to deploy the driver and touchpad software in an unattended way, so that it can be conditionally installed during the imaging process. To be honest I'm not sure how to troubleshoot this; I suppose I could try drvinst.exe to install the driver, but finding out why this fails initially would keep me from spinning my wheels.
