Search Results

  • Should a webserver in the DMZ be allowed to access MSSQL in the LAN?

    - by Allen
    This should be a very basic question, but I tried to research it and couldn't find a solid answer. Say you have a web server in the DMZ and an MSSQL server in the LAN. IMO (and what I've always assumed to be correct), the web server in the DMZ should be able to access the MSSQL server in the LAN; maybe you'd have to open a port in the firewall, and that'd be OK in my view. Our networking guys are now telling us that we can't have any access to the MSSQL server in the LAN from the DMZ. They say that anything in the DMZ should only be accessible FROM the LAN (and the web), and that the DMZ should not have access TO the LAN, just as the web does not have access to the LAN. So my question is: who is right? Should the DMZ have access to/from the LAN, or should access to the LAN from the DMZ be strictly forbidden? All this assumes a typical DMZ configuration.
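
    For what it's worth, the quickest way to see what the firewall actually permits is to test the port from the DMZ host itself. Below is a minimal connectivity check in Python; the LAN address is a placeholder and 1433 is SQL Server's default TCP port, so adjust both to your environment.

        import socket

        MSSQL_HOST = "10.0.1.25"  # placeholder LAN address of the MSSQL box
        MSSQL_PORT = 1433         # default SQL Server TCP port

        try:
            # If the firewall allows DMZ -> LAN on this port, this succeeds.
            with socket.create_connection((MSSQL_HOST, MSSQL_PORT), timeout=5):
                print("DMZ -> LAN on port 1433 is reachable")
        except OSError as exc:
            print("Blocked or unreachable:", exc)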

  • Is dual-booting an OS more or less secure than running a virtual machine?

    - by Mark
    I run two operating systems on two separate disk partitions on the same physical machine (a modern MacBook Pro). In order to isolate them from each other, I've taken the following steps: Configured /etc/fstab with ro,noauto (read-only, no auto-mount) Fully encrypted each partition with a separate encryption key (committed to memory) Let's assume that a virus infects my first partition unbeknownst to me. I log out of the first partition (which encrypts the volume), and then turn off the machine to clear the RAM. I then un-encrypt and boot into the second partition. Can I be reasonably confident that the virus has not / cannot infect both partitions, or am I playing with fire here? I realize that MBPs don't ship with a TPM, so a boot-loader infection going unnoticed is still a theoretical possibility. However, this risk seems about equal to the risk of the VMWare/VirtualBox Hypervisor being exploited when running a guest OS, especially since the MBP line uses UEFI instead of BIOS. This leads to my question: is the dual-partitioning approach outlined above more or less secure than using a Virtual Machine for isolation of services? Would that change if my computer had a TPM installed? Background: Note that I am of course taking all the usual additional precautions, such as checking for OS software updates daily, not logging in as an Admin user unless absolutely necessary, running real-time antivirus programs on both partitions, running a host-based firewall, monitoring outgoing network connections, etc. My question is really a public check to see if I'm overlooking anything here and try to figure out if my dual-boot scheme actually is more secure than the Virtual Machine route. Most importantly, I'm just looking to learn more about security issues. EDIT #1: As pointed out in the comments, the scenario is a bit on the paranoid side for my particular use-case. But think about people who may be in corporate or government settings and are considering using a Virtual Machine to run services or applications that are considered "high risk". Are they better off using a VM or a dual-boot scenario as I outlined? An answer that effectively weighs any pros/cons to that trade-off is what I'm really looking for in an answer to this post. EDIT #2: This question was partially fueled by debate about whether a Virtual Machine actually protects a host OS at all. Personally, I think it does, but consider this quote from Theo de Raadt on the OpenBSD mailing list: x86 virtualization is about basically placing another nearly full kernel, full of new bugs, on top of a nasty x86 architecture which barely has correct page protection. Then running your operating system on the other side of this brand new pile of shit. You are absolutely deluded, if not stupid, if you think that a worldwide collection of software engineers who can't write operating systems or applications without security holes, can then turn around and suddenly write virtualization layers without security holes. -http://kerneltrap.org/OpenBSD/Virtualization_Security By quoting Theo's argument, I'm not endorsing it. I'm simply pointing out that there are multiple perspectives here, so I'm trying to find out more about the issue.

  • NT 4.0 printer driver not compatible with Vista?

    - by PhillC
    I've got a Brother Fax-8360P printer that has a standard printer port, so it can be connected to my Windows XP machine. I've found some drivers on the net that work, and it is a pretty decent laser printer. However, when I try to connect to it from my Vista machine over the network, it tells me that "The printer driver is not compatible with a policy enabled on your computer that blocks NT 4.0 drivers". Main question: is it possible to alter this so that my Vista machine will allow me to print via the network? Secondary question: does anyone know of any generic driver that will work instead?
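
    For reference, the policy involved is the one that blocks kernel-mode (NT 4.0-era) printer drivers, and it can be relaxed via Group Policy or the registry. The sketch below (Python, run as Administrator on the Vista machine) shows the registry variant; the key and value names reflect my understanding of that policy and are an assumption to verify before relying on them.

        import winreg

        # Assumed location of the "block kernel-mode printer drivers" policy;
        # verify against the Group Policy documentation first.
        KEY = r"SOFTWARE\Policies\Microsoft\Windows NT\Printers"

        with winreg.CreateKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
            # 0 = allow kernel-mode drivers, 1 = block them (assumption).
            winreg.SetValueEx(key, "KMPrintersAreBlocked", 0,
                              winreg.REG_DWORD, 0)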

  • Can I enlarge the OS C: drive of my Windows 8?

    - by Sorgatz
    Last year I got a new Western Digital WD Blue 500GB HDD to replace my old drive. The first thing I did was install the latest Windows 8. While installing Windows 8 I created 3 partitions: the C: drive for the OS, and the others for storage. The OS partition is 120GB (which at the time I thought would be plenty big), but I'm now realizing it's too small! I wonder if it's possible to resize an HDD partition without reformatting and reinstalling my Windows 8. So that is my question: can I enlarge the OS C: drive of my Windows 8 without having to reformat? I've used Norton Partition Magic and Disk Management to try to make this happen, but there don't seem to be any options for it. Thanks for any help you guys can give regarding my question. I've worked hard to optimize my current install of Windows 8 and would hate to start all over again.

  • ICMP - TTL - Trace Route

    - by dbasnett
    I asked this question at Stack Overflow and then thought this may be the better place to ask. Given the following situation: PC --- |aa RTR1 bb| --- |aa RTR2 bb| --- |aa RTR3 bb| etc. Each |aa RTR bb| is meant to be a router with two ports, aa and bb. My question is this: when you do a trace route from the PC, which router port address should respond with the "time to live exceeded in transit" message? I seem to remember being taught to think of the router as being in as many parts as it has ports, so that in my scenario, when aa is forwarding the packet to bb and decrements the TTL to 0, it will be the address of the aa port in the failure message. I am trying to find the definitive answer. Thanks.
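
    For reference, a bare-bones traceroute in Python makes the mechanics visible: each probe goes out with an increasing TTL, and the source address on the returning ICMP "time exceeded" message is whatever address the router answered from, which in practice is usually the ingress ("aa") interface, though implementations vary. A minimal sketch (needs root for the raw ICMP socket; the destination is just an example):

        import socket

        DEST = "8.8.8.8"   # example destination
        PORT = 33434       # traditional traceroute UDP port

        for ttl in range(1, 16):
            recv = socket.socket(socket.AF_INET, socket.SOCK_RAW,
                                 socket.IPPROTO_ICMP)
            recv.settimeout(2.0)
            send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
            send.sendto(b"", (DEST, PORT))
            try:
                _, addr = recv.recvfrom(512)
                print(ttl, addr[0])  # the hop that zeroed the TTL
                if addr[0] == DEST:
                    break
            except socket.timeout:
                print(ttl, "*")
            finally:
                send.close()
                recv.close()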

  • Set up internal domain to use external SMTP in Exchange 2007

    - by Geoffrey
    I'm moving to Google Apps and have set up dual delivery. Everything is fine, but for mail sent internally (from [email protected] to [email protected]), Exchange is not using the send connectors I have pointing to Google's servers. I believe my question is similar to this question: How to force internal email through an SMTP connector in Exchange 2007. Again, if a user is connected to the Exchange server and tries to send to [email protected] it works just fine, but I cannot seem to force *@mydomain.com to route correctly. This should be fairly simple, but according to this: google.com/support/forum/p/Google+Apps/thread?tid=30b6ad03baa57289&hl=en (can't post two links due to spam prevention) it does not seem possible. Any ideas?

  • Which browser does my computer use to open a Web page? [on hold]

    - by msh210
    I know little about networking and the Internet, but, from what I understand, it works (very approximately) as follows: I, sitting at the computer example.com, send a message saying, roughly, "get http://s.tk" to my ISP, which passes the message along, eventually to the machine at s.tk. The s.tk machine gets "example.com has sent 'get http://s.tk'", so it sends some file to its ISP, which passes the file along, eventually to the machine at example.com. When the file gets back to example.com, my computer, how does my computer know what to do with it? I'm sure the headers (or something else) indicate it's a Web page rather than, say, a Usenet post — that's not my question. My question is: how does it know whether to display the Web page in my open Opera window or my open Firefox window, or my other open Firefox window, or, heck, to open a new browser instance?
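
    One detail worth making concrete: the response does not come back as a free-floating file that the OS must hand to "a browser"; it arrives on the very TCP connection the browser itself opened, and the kernel delivers it by matching the connection's local address and port. A small Python sketch (using example.com as a stand-in server) shows that each connection has its own local endpoint and receives only its own reply:

        import socket

        def fetch(host="example.com"):
            s = socket.create_connection((host, 80))
            print("local endpoint:", s.getsockname())  # unique per connection
            s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
            reply = s.recv(200)  # delivered only to this socket
            s.close()
            return reply

        # Two independent "browsers": each gets its reply on its own socket.
        fetch()
        fetch()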

  • Command-line sort and copy of text files into one single file produces an error

    - by user169997
    I stumbled on question 217394, which explains how to copy files sorted alphabetically into one single file. Trying to implement the command myself produced the following error message: "The system cannot find the file specified." The command I am trying to run is here: for /f %i in ('dir /a-d /o-n /b O:\OrdersExport\Order*.txt') do @type %i >> C:\Users\Admin\Documents\OrderImport.txt The error does not appear if I browse to the folder in question first: C:\>O: C:\>cd OrdersImport I simply want one line that copies from the source folder to that single file. By the way, if it matters, O: is a folder mapped over the network.

  • Web server (IIS) and database mirroring (PostgreSQL)

    - by Timka
    Recently our web server crashed and we had to recover everything from a backup, which took the whole day (totally unacceptable in our business). So my question is: how can I create a complete mirror of the server that I can use (switch DNS to) in case the same disaster happens in the future? Our main server is on Amazon with Windows 2008/IIS + PostgreSQL 9.1. I was thinking of creating the same server in a different location as a complete mirror with database replication, but I'm not sure how to implement IIS instance mirroring over the Internet...

  • Data transfer to my own computer from a website hosted by the same computer

    - by gunbuster363
    Hi all, I have a question about hosting a website on my own computer, say computer A, using any web server application, e.g. Apache. I connect to my website on the very same computer A and request to download a file of size 1MB; in other words, I am connecting to my own computer and want to download a file that is on my computer. In addition, my Internet access goes through a proxy server acting as a gateway. The questions are: does the file transfer really happen over the network, or is it a local file copy between two locations? Will my data packets go through the proxy, out to the Internet, and come back through the proxy to me? Thanks to everyone looking at this question.
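
    A quick way to see what actually happens is to look at both ends of the connection. In the sketch below (assuming a web server is listening on computer A, port 80), the client and server addresses both belong to computer A, which means the kernel loops the traffic back internally; the packets never travel out to the proxy or the Internet.

        import socket

        # Assumes a web server is listening on this machine, port 80.
        s = socket.create_connection((socket.gethostname(), 80))
        print("client side:", s.getsockname())  # an address of computer A
        print("server side:", s.getpeername())  # also an address of computer A
        s.close()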

  • What's the strategy to implement a "knowledge base" in my company?

    - by Oscar Reyes
    In my current work we think we could benefit from having a knowledge base, so the next time someone has a question, problem, etc., that base can be consulted and an answer will show up. It would also reduce the risk of people leaving the company with their knowledge, which would force us to start all over again. My question is: what strategy can we follow to implement/buy/get/build this knowledge base? Is there software ready-made for this? Would it be better to build something ourselves (we have some programmers)? This is a small company (< 30 people), and the base should be accessible from outside the office (when employees are with a customer, etc.), so I guess a web app is in order.

  • Git version control with multiple users

    - by ignatius
    Hello, I am a little bit lost with this issue, so let me explain my problem: I want to set up a git repository; three or four users will contribute, so they need to download the code and should be able to upload their changes to the server or update their branch with the latest modifications. So I set up a Linux machine, installed git, set up the repository, then added the users in order to enable access through SSH. Now my question is: what's next? The git documentation is a little bit confusing. For example, when I try to clone the repository from a dummy user account I get: xxx@xxx-desktop:~/Documentos/git/test$ git clone -v ssh://[email protected]/pub.git Initialized empty Git repository in /home/xxx/Documentos/git/test/pub/.git/ [email protected]'s password: fatal: '/pub.git' does not appear to be a git repository fatal: The remote end hung up unexpectedly Is that a permissions problem? Do I need any special configuration? I want to avoid using git-daemon or gitosis. Sorry, maybe my question sounds silly, but git, although powerful, is, I admit, not so user-friendly. Thanks, Br

  • Why does Django's dev server use port 8000 by default?

    - by kojiro
    (My question isn't really about Django. It's about alternative HTTP ports. I just happen to know Django is a relatively famous application that uses 8000 by default, so it's illustrative.) I have a dev server in the wild on which we occasionally need to run multiple httpd services on different ports. When I needed to stand up a third service and we were already using ports 80 and 8080, I discovered our security team has locked port 8000 access from the Internet. I recognize that port 80 is the standard HTTP port, and 8080 is commonly http_alt, but I'd like to make the case to our security team to open 8000 as well. In order to make that case, I hope the answer to this question can provide me with a reasonable argument for using port 8000 over 8080 in some case. Or was it just a random choice with no meaning?

  • Point DNS server to root DNS servers [duplicate]

    - by Dhaksh
    This question already has an answer here: What is a glue record? (3 answers); Why does DNS work the way it does? (4 answers). I have set up a custom authoritative-only DNS server using BIND9, in a master-and-slave arrangement. Assume the DNS servers are: ns1.customdnsserver.com [192.168.91.129] ==> Master, ns2.customdnsserver.com [192.168.91.130] ==> Slave. Now I will host a few shared-hosting websites on my own web server, where I will point the domains in shared hosting at the above nameservers. My question is: how do I tell the root DNS servers about my own authoritative-only DNS server, so that when someone queries for the domain www.example.com, and the domain's website is hosted on my shared hosting, the root servers direct the query to my own DNS server and www.example.com gets resolved to an IP address?
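
    In case it helps to check the moving parts: the delegation is published by your registrar into the parent (TLD) zone rather than sent to the root servers directly, and you can inspect what the public DNS tree currently hands out for a domain. A small sketch using the dnspython library (an assumption; install it separately):

        import dns.resolver  # dnspython, assumed installed (pip install dnspython)

        # Once the registrar has published the delegation, the NS records
        # returned for the domain should be your ns1/ns2.
        for ns in dns.resolver.resolve("example.com", "NS"):
            print(ns.target)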

  • Maximum number of files in one ext3 directory while still getting acceptable performance?

    - by knorv
    I have an application writing to an ext3 directory which over time has grown to roughly three million files. Needless to say, reading the file listing of this directory is unbearably slow. I don't blame ext3. The proper solution would have been to let the application code write to sub-directories such as ./a/b/c/abc.ext rather than using only ./abc.ext. I'm changing to such a sub-directory structure and my question is simply: roughly how many files should I expect to store in one ext3 directory while still getting acceptable performance? What's your experience? Or in other words; assuming that I need to store three million files in the structure, how many levels deep should the ./a/b/c/abc.ext structure be? Obviously this is a question that cannot be answered exactly, but I'm looking for a ball park estimate.
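
    To make the arithmetic concrete, here is a sketch of a common fan-out scheme: hash the filename and use successive pairs of hex digits as directory names. Two levels of 256 directories give 65,536 leaf directories, so three million files land at roughly 46 per directory, which is comfortably small.

        import hashlib
        import os

        def shard_path(name, levels=2, width=2):
            """Map abc.ext to something like ab/cd/abc.ext via its hash."""
            digest = hashlib.md5(name.encode()).hexdigest()
            parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
            return os.path.join(*parts, name)

        # 256**2 = 65,536 leaves; 3,000,000 / 65,536 is about 46 files each.
        print(shard_path("abc.ext"))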

  • Enable ASP.NET connection pool monitoring with Performance Monitor

    - by BlackHawkDesign
    If this question is in the wrong forum, feel free to tell me. I'm a C# developer, but I'm running into a system management issue here. Intro: I suspect that an ASP.NET application is having some issues with the connection pool and that the pool is flooding from time to time. So to make sure, I want to monitor the connection pool. After some searching I found this article: http://blog.idera.com/sql-server/performance-and-monitoring/ensure-proper-sql-server-connection-pooling-2/ Basically it explains connection pools and how you can monitor the connection pool with Performance Monitor. The problem: So I logged in to the ASP.NET server (the SQL database is hosted on a different server) which hosts the website, and started Performance Monitor. But when I want to select 'Current # pooled and nonpooled connections', I have no instance to select, and therefore I can't add it. Question: How can I create/supply an instance so I can monitor the connection pool? Thanks in advance, BHD

  • Make Google Chrome with a specific user profile the default browser

    - by Kaushik Gopal
    Is it possible to set Google Chrome with a custom user profile as the default browser? When I set Google Chrome as the default browser, it picks the "Default" user profile rather than the custom one I have set up. I tried setting Google Chrome as the default browser after opening it from that particular user profile, but it doesn't seem to have any effect. I googled around but could only find another poor soul like myself who asked a similar question here: http://www.google.com/support/forum/p/Chrome/thread?tid=69f0a6e776ceab1c&hl=en There weren't any responses to that question. Cheers.
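
    One workaround I have seen sketched is to register a small wrapper as the default browser and have it forward the URL to Chrome with an explicit profile. The sketch below is Python; --profile-directory is Chrome's switch for selecting a profile, but the binary name, the profile name, and the mechanics of registering a script as the default browser are assumptions to adapt to your OS.

        #!/usr/bin/env python3
        """Hypothetical default-browser wrapper: forwards URLs to one profile."""
        import subprocess
        import sys

        subprocess.run([
            "google-chrome",                  # Chrome binary (varies by OS)
            "--profile-directory=Profile 1",  # the custom profile (assumption)
            *sys.argv[1:],                    # the URL handed over by the OS
        ])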

  • Which OS/distributions have 64-bit kernel and 32-bit userspace? [closed]

    - by osgx
    Which OSes (or distributions) come with 64-bit kernels (x86_64, SPARC64, PPC64, or something else?) and a 32-bit userland? I want all small userspace programs (like ls, cat, etc.) to be 32-bit, because they really don't need to be 64-bit, but the OS kernel must be 64-bit to use >= 3 GB of RAM. Database programs (when using a lot of memory) can also be 64-bit. 64-bit mode can hurt some programs: it makes them bigger, eating (wasting) memory on pointers, especially in pointer-heavy abstract data types like lists and trees. A 64-bit program wastes twice the memory on each pointer, and I don't want that. And the question is not "Are 32-bit programs needed when a 64-bit processor is available?"; the question is "What OS comes with a 32-bit userspace and a kernel in 32/64-bit mode?". Examples of such OSes include Solaris/SPARC64 and Mac OS X/x86_64 (10.5).
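
    For what it's worth, you can observe the split from inside a process: the kernel architecture and the pointer width of the running program are reported independently. A quick Python check:

        import platform
        import struct

        print(platform.machine())        # kernel/CPU architecture, e.g. x86_64
        print(struct.calcsize("P") * 8)  # pointer width of THIS process: 32 or 64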

  • How do I get Excel to close completely after creating a macro in a personal workbook?

    - by Greg B
    I am using Microsoft Excel 2007 and have several macros in my personal.xlsb workbook, which I use often, so it is very convenient that Excel opens them automatically when it starts. What I don't like is when I click on the "X" in the upper right corner of the window Excel does not exit when I close the last visible workbook. I think that this is because personal.xlsb is still open (though hidden). There are several other questions here on Superuser that have people remove personal.xlsb or move it so it doesn't open on startup (question 65297) or change settings to have only one window show in the taskbar (question 86989). (Sorry there are no hyperlinks--apparently I need more reputation to add additional hyperlinks.) I would like to have personal.xlsb open when I open Excel, have each Excel window show in the taskbar but have Excel exit when I click the "X" on the last workbook that isn't personal.xlsb. Any thoughts on how to achieve this?

  • Performance difference between compiled and binary linux distributions/packages

    - by jozko
    I searched a lot on the Internet and couldn't find an exact answer. There are distros like Gentoo (or FreeBSD) which do not come with binaries, only with source code for packages (ports). The majority of distros use binary packages (Debian, etc.). First question: How much of a speed increase can I expect from compiled packages? How much of a speed increase can I get from real-world packages like Apache or MySQL, i.e. in queries per second? Second question: Does a binary package mean it does not use any CPU instructions that were introduced after the first AMD 64-bit CPU? With 32-bit packages, does it mean that the package will run on a 386 and basically does not use most modern CPU instructions? Additional info: I am not talking about a desktop but a server environment; I don't care about compile time; I have several servers, so a speed increase of more than 15% would make source-code packages worthwhile; and please, no flame wars. Thank you very much.

  • Arguments passed on by shell to command in Unix

    - by Ryan Brown
    I've been going over this question and I can't for the life of me figure out why the answer is what it is. How many arguments are passed to the command by the shell on this command line: <pig pig -x " " -z -r" " >pig pig pig (a. 8, b. 6, c. 5, d. 7, e. 9). The first symbol is supposed to be the symbol for redirected input but the site isn't letting me use it. [Fixed.] I looked at this question and said OK... arguments... not options, so the 2nd pig, then " ", then -r" ", the 4th pig and the 5th pig... -z and -x are options, so I count 5. The answer is b. 6. Where is the 6th argument that's being passed on?
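
    A throwaway counter script settles this empirically. The shell consumes both redirections (<pig and >pig) before the command ever runs, and everything left after the command name, options included, is an argument. Save the sketch below as, say, argcount:

        #!/usr/bin/env python3
        # argcount: print how many arguments the shell actually delivered.
        import sys
        print(len(sys.argv) - 1, sys.argv[1:])

    Running ./argcount -x " " -z -r" " pig pig (the original line minus the two redirections, with the command name swapped for argcount) prints 6: the quoted " " is one whitespace-only argument, and -r" " joins into the single word "-r " (dash, r, space). Options count as arguments too, which is where the missing 6th comes from.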

  • Amazon RDS Pros/Cons of Multiple DBs per instance

    - by Joe Flowers
    I run two completely independent websites. I am moving their MySQL databases to Amazon RDS. I'm not going to do a Multi-AZ deployment - let's remove that variable from this question. I'm not sure whether to create a single RDS instance with two databases, or two Amazon RDS instances each with a single database. Ignore cost for the sake of this question. I will not hit the 1 TB data limit, so let's ignore that. However, it is extremely important that crashing one of the websites doesn't impact the other. Based on this document - http://docs.amazonwebservices.com/AmazonRDS/latest/UserGuide/Concepts.DBInstance.html - I'm assuming that if I write terrible code that crashes one of the databases in a given RDS instance, it could possibly take down the entire RDS instance (and thus inadvertently affect the other database). Is that correct? Thanks

  • How to change from own Internal/External DNS to use an outsourced service like DNS Made Easy?

    - by Joakim
    Our current setup is a co-located Linux box with an OpenVZ kernel and a handful of virtual containers for www, mail, etc., plus one container running BIND9 with a split-views configuration serving external and internal DNS. The hardware node runs a Shorewall firewall, and all containers use private IPs. The box (and DNS) basically handles web and mail for a handful of domains, and it works well, but we still think it would be a good idea to outsource the public DNS. Now to my question... Although I am fairly comfortable with the server stuff and DNS, I'm far from a pro, and I basically need confirmation that I am thinking in the right direction: I basically just move the contents of our external view (with its zone files) to the external service, keep the internal view (or actually remove the view), update the new external DNS service, update the nameserver info at my registrar, and wait for propagation. Or have I missed something? Maybe someone else here already runs something similar and can share some experiences? I found this question, which at least confirms it can be done.

  • A clear understanding of Mozilla Firefox web applications

    - by Girish Mony
    I have lately seen a concept of installing open web applications in Firefox and Google Chrome, just like extensions, here. While the site describes them as installable websites, they look very similar to bookmarks to me. When you click either a bookmark or one of these installed applications, it opens a new tab and the site or application is shown. My question is: what is the main difference between a normal web application like Gmail, Super User, or Facebook and these installable websites? Also, what is the advantage of these installable web apps over normal web applications that we can access by entering a URL in the browser address bar? I hope this is the right place to post this question. If not, please guide me accordingly.
