Search Results

Search found 35538 results on 1422 pages for 'access 2007'.

Page 873/1422

  • How to move the 100 MB hidden System Reserved partition on Windows Server 2008 R2 to another drive?

    - by Artyom Krivokrisenko
    Hello! I have a server with two 1.5 TB hard drives. I was going to install Windows Server 2008 R2 and create a software RAID1 using the Windows Disk Management console. I installed Windows, opened the console, and this is what I see: http://i.imgur.com/KoC9a.png The setup program created a System Reserved partition on my second HDD. Now I don't understand how I can create the RAID1, because the space that was supposed to hold the copy of disk C is taken by this hidden partition. Is there any way to create a correct RAID1 now? Maybe it is possible to move this partition to Disk 0, where I have plenty of free space? Unfortunately I can't reinstall Windows and choose different options at the disk-management step of the installation, because the installation image is no longer connected to the server and I have no physical access to it, only Remote Desktop.
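
    One possible approach (a hedged sketch, not a tested procedure: take a backup first, and note the disk/partition numbers below are assumptions that must be checked against list disk / list partition) is to make C: bootable on its own and then delete the System Reserved partition, freeing that space for the mirror:

        rem from an elevated command prompt
        bcdboot C:\Windows /s C:         rem copy the boot files and BCD store onto C: itself
        diskpart
        DISKPART> select disk 0
        DISKPART> select partition 1     rem the partition holding C: (placeholder number)
        DISKPART> active                 rem make C: the active boot partition
        DISKPART> select disk 1
        DISKPART> select partition 1     rem the 100 MB System Reserved partition
        DISKPART> delete partition override

    After a successful reboot from C:, both disks can be converted to dynamic and mirrored in Disk Management.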

    Read the article

  • Read data on an NTFS partition from Ubuntu

    - by albert green
    Hi, I had Windows XP with an NTFS partition for programs (C:), and I have now installed Ubuntu 10.10 on the machine; I will use Ubuntu from now on. The disk held the NTFS partition plus some free space, and I created a new Linux ext3 partition in the free space. From Ubuntu I opened Disk Utility and saw the Windows partition marked as free space. The only option offered was to create a partition, so I created one as NTFS, but I did NOT format it. I don't care about the Windows system; I just need to access the Program Files folder on that partition to recover my Chrome bookmarks, which I forgot to save before installing Linux. Do you think this is possible? If so, how? Thanks.
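
    If the re-created partition entry happens to line up with the old filesystem, the data may still mount read-only as-is (a hedged sketch; /dev/sda1 is an assumption, so identify the right device first). Note that Chrome keeps bookmarks under the user profile rather than Program Files:

        sudo fdisk -l                                        # find the NTFS partition
        sudo mkdir -p /mnt/winxp
        sudo mount -t ntfs-3g -o ro /dev/sda1 /mnt/winxp     # mount read-only, to be safe
        # Chrome bookmarks on XP (the <user> part is a placeholder):
        ls "/mnt/winxp/Documents and Settings/<user>/Local Settings/Application Data/Google/Chrome/User Data/Default/Bookmarks"

    If the mount fails because the new partition entry does not match the old boundaries, TestDisk (sudo apt-get install testdisk) can usually find and restore the original entry.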

    Read the article

  • How to create an isolated test environment

    - by Safin09
    Hi all, I am very new to the VMware world. We have VMware vCenter 4 in the production environment, and we have created multiple VLANs through a Cisco switch. I want to know how I can create an isolated test environment for software-testing purposes only, so that anything that happens in that test VLAN cannot cause trouble in the production environment. Is "host-only networking" the solution, or is there a better way to achieve this? My requirements:
    A. Hosts should be able to access the Internet and a network share drive, but not the production network.
    B. Hosts should be able to connect to each other inside the virtual LAN.
    C. I should be able to take automatic or periodic backups or snapshots, and deploy a snapshot when necessary.
    Whatever your answer is, please give me the steps if possible, as shown in the sketch below. If I need to purchase anything I am ready to, but I don't want to spend big money. Many thanks in advance.
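
    One common pattern (a hedged sketch for the ESX 4.x service console; the names are placeholders) is a standard vSwitch with no physical uplink: VMs on its port group can only reach each other on that host, which covers requirement B. Requirement A then needs a small firewall/router VM with one NIC on this port group and one on a production-facing port group, so Internet and file-share access can be allowed while production traffic is blocked:

        esxcfg-vswitch -a vSwitchTest            # new vSwitch with no physical NIC attached
        esxcfg-vswitch -A "TestLab" vSwitchTest  # add the isolated port group "TestLab"
        esxcfg-vswitch -l                        # verify no uplink is listed for vSwitchTest

    Snapshots (requirement C) work the same for these VMs as for any others; periodic snapshots can be driven from vCenter scheduled tasks.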

    Read the article

  • iptables prerouting to redirect source ip address on ethernet

    - by Kevin Campion
    I have two IP addresses on the Internet that both point to the same machine. On this machine, a Debian VE runs under OpenVZ, and iptables rules redirect all HTTP requests for ip_address_2 to the VE (DNAT --to ip_address_local_1):

        I |                       DNAT --to ip_address_local_1
        N | ip_address_1        +--------+
        T |---------------------|        |    +------------+
        E |                     | OpenVZ |----| Debian1 VE |-- Apache's log:
        R | ip_address_2        |        |    +------------+    [client ip_address_1]
        N |---------------------|        |
        E |                     +--------+
        T |

    The iptables rules:

        iptables -t nat -A PREROUTING -p tcp -i eth0 -d ip_address_2 --dport 80 -j DNAT --to ip_address_local_1:80
        iptables -A FORWARD -p tcp -i eth0 -o venet0 -d ip_address_local_1 --dport 80 -j ACCEPT
        iptables -A FORWARD -p tcp -i venet0 -o eth0 -s ip_address_local_1 --sport 80 -j ACCEPT

    When I go to the web page via http://ip_address_2 I see the right content, but the client address in the access log file is ip_address_1; I would like to see my ISP's IP address there instead. Any ideas?
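
    A plausible culprit (a hedged guess, not a confirmed diagnosis) is an SNAT or MASQUERADE rule in the nat POSTROUTING chain; OpenVZ hosts often carry one so the VE can reach the Internet, and if it matches forwarded traffic too, it rewrites the client source address before the packet reaches Apache. Something to check:

        # list POSTROUTING and look for SNAT/MASQUERADE affecting venet0 traffic
        iptables -t nat -L POSTROUTING -n -v
        # a hypothetical offending rule would look like this; limiting it to traffic
        # that originates from the VE (-s ip_address_local_1) instead of everything
        # leaving eth0 preserves the real client IP in Apache's logs:
        # iptables -t nat -A POSTROUTING -s ip_address_local_1 -o eth0 -j SNAT --to ip_address_1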

    Read the article

  • transient DNS issues with certain sites in Snow Leopard

    - by ceejayoz
    I'm having a rather bizarre DNS issue with Snow Leopard. Certain sites, MSNBC.com being the most noticeable for me, have an odd issue with DNS, in all browsers. After not visiting the site for a while (30 minutes or so), the first attempt to access MSNBC.com results in a DNS error; refreshing one to five times resolves the issue until the next ~30-minute period of inactivity. I've seen this on three separate Macs at this point: one from-the-factory Snow Leopard install, two upgrades. Most sites are just fine. I Googled and found other reports of the same thing with MSNBC, but no solutions.
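
    Some checks that may narrow down where the failure lives (a debugging sketch; 8.8.8.8 is just an example public resolver):

        scutil --dns                         # list the resolvers Snow Leopard is actually using
        dig www.msnbc.com +short             # query the configured resolver directly
        dig www.msnbc.com @8.8.8.8 +short    # compare against a public resolver
        dscacheutil -flushcache              # flush the 10.6 DNS cache when the error appears

    If dig succeeds at the same moment the browser fails, the suspect is the OS-level cache (mDNSResponder) rather than the upstream servers; if dig fails too, it points at the resolvers themselves, and short CDN TTLs on MSNBC's records might explain the roughly 30-minute cycle.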

    Read the article

  • Port-forwarding on a Livebox to a router

    - by Yusuf
    Hello, At home I have two routers, a Livebox and a Netgear. The reason I need the Livebox is that the phone line cannot be connected to the Netgear router. So the Livebox is connected to the phone line, the Netgear is connected to the Livebox, and all PCs are connected to the Netgear. My issue is that for every application or port I want to expose externally, I have to create an entry in both the Livebox and the Netgear routers; so I would like to know if there is a way to automatically forward all requests to the Netgear router, from which I will then forward to the required IP:port. Thanks in advance.

    Read the article

  • Create Webmin user for an EC2 Instance

    - by Dean
    I've set up an Amazon EC2 instance using the Ubuntu 12.04 AMI (ubuntu/images/ebs/ubuntu-precise-12.04-amd64-server-20120424 (ami-a29943cb)), and I'd like to get Webmin working (so I can set up a DNS server). After following the installation instructions on Webmin's site, the installer says I can log in with the username/password of any user who has superuser access. The problem is that the EC2 instance only has one user, "ubuntu", which can only log in using SSH keys, not a password! I've tried creating users manually and I can't log in as those users (even via SSH), so I think it might be a permissions thing provided by the AMI. Does anyone know the best way to set up a login for my Webmin?
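
    A minimal sketch, assuming a stock precise AMI where Webmin authenticates against the local Unix accounts: give a sudo-capable user a password, which only Webmin will use (SSH password logins stay disabled by the AMI's sshd_config, so this does not weaken SSH):

        sudo passwd ubuntu                  # set a password for the existing admin user
        # or keep "ubuntu" key-only and add a dedicated account instead
        # ("webminadmin" is a hypothetical name):
        sudo adduser webminadmin
        sudo usermod -aG sudo webminadmin   # give it superuser rights via sudo

    The manually created users probably couldn't log in over SSH because the AMI ships with PasswordAuthentication no; they would need their own ~/.ssh/authorized_keys entries, which doesn't affect Webmin logins.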

    Read the article

  • Forcing logon to an AirWatch server upon joining Wi-Fi

    - by DKNUCKLES
    I'm setting up a wireless controller with a network (VSC) that I would like to leave unsecured. When a user connects to this network, they need to be forwarded to a specific page where they can authenticate with the AirWatch system we have in place. Once authentication takes place, a profile is downloaded to their device and we can administer the devices accordingly. I'm mulling over how I can force that page onto the user when they connect. The methodology I'm considering is creating a NAT rule for that VSC that forwards all port 80 and 443 traffic to the AirWatch server. Once they authenticate, a profile is downloaded which connects the devices to a virtual access point whose SSID isn't broadcast. Is this methodology correct, or can someone think of an easier / more efficient way of accomplishing this? The controller is an HP MSM720, for what it's worth.
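
    The NAT idea is essentially a hand-rolled captive portal. As a generic illustration only (a Linux-gateway sketch, not MSM720 syntax; 10.0.0.5 stands in for the AirWatch server), the rule pair would be:

        iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 80  -j DNAT --to-destination 10.0.0.5:80
        iptables -t nat -A PREROUTING -i wlan0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.5:443

    One caveat: intercepting port 443 this way produces certificate-mismatch warnings on every client, since the device expected some other site's certificate. A controller's built-in captive-portal / guest-access feature, which intercepts only the first plain-HTTP request, usually avoids that and is worth checking for on the MSM720 before building this by hand.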

    Read the article

  • Components needed for VPN

    - by Anriëtte Combrink
    Hi there. We eventually got our Mac Mini Server, and we now want to use it to set up a small remote-access VPN. Firstly, we are not sure which components are needed in addition to the server itself. We currently have the following:
    - 1 Mac Mini Server
    - 1 firewall router (a Billion 802.11g ADSL2+ router with VPN capabilities, or so it says on the box)
    - a 4 Mbps ADSL connection (which should have VPN capability enabled by the service provider, or so we heard)
    We are not sure what else is needed to enable our small VPN. Any advice would be really helpful.
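
    Hardware-wise that list may already be enough, since Mac OS X Server ships a built-in L2TP/PPTP VPN service. A hedged sketch with the serveradmin CLI (the same switches exist in the Server Admin GUI; the exact L2TP settings key below is an assumption from memory, so verify it against the output of sudo serveradmin settings vpn):

        sudo serveradmin settings vpn:Servers:com.apple.ppp.l2tp:enabled = yes
        sudo serveradmin start vpn
        sudo serveradmin fullstatus vpn    # confirm the service is running

    The remaining pieces are usually a shared secret and user accounts for L2TP/IPsec, plus port forwards on the Billion router to the Mac Mini (UDP 500, 1701, and 4500 for L2TP/IPsec); the ADSL line itself needs no special VPN feature from the provider for this kind of setup, just a reachable public IP.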

    Read the article

  • Always a path to the Internet even if Windows SBS is off

    - by Mark
    Hello all, is it possible to have a configuration in a Windows 2003 SBS environment where, in the event that the SBS box crashes, is turned off, or is being worked on, there still exists a path to the Internet for domain users and visitors? I would like to have the standalone router issue DHCP leases, with the primary DNS pointing to the SBS and the secondary pointing to the ISP's DNS server. My theory is that if someone was using the Internet and the SBS box went down, they wouldn't be able to access the network shares but would still be able to use the Internet. (We are moving everything into the cloud with Google Apps Non-Profit.) Does this seem like a reasonable configuration, or are there pitfalls I will fall into? Thanks, Mark
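
    One pitfall worth noting: Windows clients do not neatly fail over between DNS servers per query, and whenever the secondary (ISP) server ends up answering, it cannot resolve internal Active Directory names, which can cause domain oddities even while the SBS is healthy. A common alternative (a hedged sketch; the 203.0.113.x addresses are placeholders for the ISP's resolvers) is to keep clients pointed only at the SBS and let its DNS service forward external queries:

        rem on the SBS box: forward anything it is not authoritative for to the ISP
        dnscmd /ResetForwarders 203.0.113.1 203.0.113.2

    That alone does not cover the SBS-down case; for that, a second internal DNS server handed out as the true secondary is the more robust path.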

    Read the article

  • Why do the IP addresses randomly change?

    - by GiddyUpHorsey
    I have a network with the following:
    - Cable modem with a static IP address
    - Router
    - Desktop (Win 7)
    - VM host (VMware ESXi 4.0)
    - A couple of Windows VM guests
    Every now and then my Win 7 PC is unable to access some of the VMs. When I ping the VMs by their domain name, their IP address shows up as the IP address of the cable modem. Sometimes I can fix it by running ipconfig /flushdns: the IP address resets back to what it is supposed to be, but occasionally it won't work. Why does this happen and how can I fix it?
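
    A local name resolving to the modem's public IP often means an unqualified lookup is falling through to the ISP's DNS, which answers unknown names with a catch-all "search assistance" address. Some hedged checks from the Win 7 box (vmguest1 and home.local are hypothetical names):

        rem see which DNS server answers the short name, and with what address
        nslookup vmguest1
        rem compare against the fully qualified name
        nslookup vmguest1.home.local
        rem inspect what is cached before flushing it
        ipconfig /displaydns | more

    If the short name returns the modem's IP while the FQDN resolves correctly, the fix is on the resolution side: set the connection's DNS suffix, use a local DNS server (or the router) that knows the VM names, or add hosts-file entries.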

    Read the article

  • DRS: Unknown JNLP Location

    - by Joe
    We are using Deployment Rule Sets to limit access to the older JRE to well-known applications, but we are running into a problem. One business-critical application has the following properties (*s to protect info):

        title: Enterprise Services Repository
        location: null
        jar location: http://app.*.com:52400/rep/repository/*.jar
        jar version: null
        isArtifact: true

    The application downloads a .jnlp file and uses Java Web Start to execute. Since the location is null, this application cannot be targeted by a location rule, and the certificate-hash method only works when the application is cached (i.e. has been run more than once). If cache storing is off, which is the case in some situations, how can this application be targeted, or at least told to run with an older JRE on start? This problem is specifically noted in this bug. Thanks!
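
    For reference, a hedged sketch of the certificate-based ruleset.xml entry (the hash is a placeholder for the SHA-256 of the jar's signing certificate, and the version string is illustrative); whether such a rule can match before anything has been cached is exactly what the cited bug covers:

        <ruleset version="1.0+">
          <rule>
            <id>
              <certificate hash="AAAABBBB...placeholder...CCCCDDDD" />
            </id>
            <action permission="run" version="1.6.0_45" />
          </rule>
        </ruleset>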

    Read the article

  • Secure PHP environments with PHP-FPM and SFTP

    - by pdd
    I'd like to set up secure environments for a small number of untrusted PHP websites on a Debian server. Right now everything runs on the same Apache2 with mod_php5, with vsftpd for administrative file access, so there is room for improvement. The idea is to use nginx instead of Apache, SFTP through OpenSSH (chrooted in sshd_config) instead of vsftpd, and individual users for each website with their own pool of PHP processes. All these users and nginx are part of the same group. In theory I can then set 700 permissions on all PHP scripts and 750 on the static files that nginx has to serve. Theoretically, if one website is compromised, all the other users' data is safe, right? Are there better solutions that require less setup time and memory per website? Cheers
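
    A minimal sketch of one per-site pool under this scheme (names and paths are placeholders; the ondemand manager, available in php-fpm 5.3.9+, keeps idle sites from holding workers in memory, with pm = dynamic as the fallback on older builds):

        ; /etc/php5/fpm/pool.d/site1.conf
        [site1]
        user = site1                    ; the site's own chrooted SFTP user
        group = sites                   ; the shared group nginx also belongs to
        listen = /var/run/php5-fpm-site1.sock
        listen.owner = site1
        listen.group = sites
        listen.mode = 0660
        pm = ondemand
        pm.max_children = 5
        pm.process_idle_timeout = 10s

    nginx then passes that vhost's PHP requests to unix:/var/run/php5-fpm-site1.sock, so scripts run strictly as site1 and the 700/750 permission split behaves as intended.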

    Read the article

  • How to Remove a VM From Hyper-V Without Deleting the Configuration File?

    - by Steven Murawski
    I'm in the process of moving a number of virtual machines that are homed on shared storage (a file share, though a shared cluster disk would work as well) to a new VM host with access to the same shared storage. The new host is a different build version (moving from Windows Server 2012 Beta to Windows Server 2012 RC, though this same process could be used for migrations from Windows Server 2008/2008 R2 to Windows Server 2012 as well), so I cannot migrate the machines with the inbox tooling. I need to remove each VM from management of the source Hyper-V host in order to import it on the new Hyper-V host. I want to retain the configuration file so I can import the VM as it stands and not need to reconfigure it. The VHD files are rather large and they are staying on the same file share, so I'd rather not duplicate them during the move.
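
    A hedged sketch with the Hyper-V PowerShell module (the paths and the GUID filename are placeholders; worth verifying on a throwaway VM first, since Remove-VM deletes the configuration it unregisters while leaving the VHDs in place):

        # preserve the configuration XML before unregistering the VM
        Copy-Item "\\storage\VMs\MyVM\Virtual Machines\<GUID>.xml" "\\storage\VMs\MyVM\saved\"
        Stop-VM -Name MyVM
        Remove-VM -Name MyVM    # removes the VM from this host; VHD files are not touched
        # then, on the Server 2012 RC host, register the VM in place from the saved copy:
        Import-VM -Path "\\storage\VMs\MyVM\saved\<GUID>.xml" -Register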

    Read the article

  • How to configure CruiseControl.NET for Windows Authentication?

    - by balu
    I am using CruiseControl.NET for continuous integration, and access to the dashboard currently goes through the login plugin, which authenticates and authorizes users against a set of users saved as an XML file on the CruiseControl.NET server. Now I need to bring Windows Authentication into the system, so that when the web dashboard is accessed from a client machine (a local machine associated with a common server), the user is authenticated and then authorized to use CruiseControl.NET features based on the authority of the logged-in user. Kindly guide me on how to go ahead with this; I would appreciate any resources that would be helpful for achieving it. Thanks.
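
    The IIS side of this is standard ASP.NET (a sketch; it only makes IIS demand Windows credentials for the dashboard, and mapping those identities onto CruiseControl.NET's own permission model still happens in the dashboard/server security configuration):

        <!-- in the web dashboard's web.config -->
        <system.web>
          <authentication mode="Windows" />
          <authorization>
            <deny users="?" />   <!-- reject anonymous users -->
          </authorization>
        </system.web>

    Windows Authentication must also be enabled (and Anonymous Authentication disabled) for the dashboard application in IIS itself.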

    Read the article

  • Application Compatibility Clients do not show in MSSQL database, but do show in \AppCompat\

    - by rjt
    The Application Compatibility clients are not denied access to the central MSSQL database, yet their data only ever appears as files in the \AppCompat\ share. The only computer that shows up in the Microsoft Application Compatibility Manager database is the machine I initially created the .MSI installer from. The MSI was successfully pushed out via GPO and, as I said, there are tons of files in the \AppCompat\ share from many different computers; but only one PC shows up in the Data Collection Manager database, so I only have data from one machine. I could manually add all these machines (ADNETBIOSNAME\MACHINENAME221$) to the MSSQL AppCompat DB permissions list, or use an SQL command to do so in batch, but I suspect I must have missed something. Do you manually edit the MSI to set the credentials?
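
    If it does come down to machine-account permissions, the batch version is plain T-SQL (a hedged sketch; the database name and account follow the pattern in the question and are placeholders):

        -- grant one collecting workstation's computer account write access
        USE AppCompat;
        CREATE LOGIN [ADNETBIOSNAME\MACHINENAME221$] FROM WINDOWS;
        CREATE USER  [ADNETBIOSNAME\MACHINENAME221$] FOR LOGIN [ADNETBIOSNAME\MACHINENAME221$];
        EXEC sp_addrolemember 'db_datawriter', 'ADNETBIOSNAME\MACHINENAME221$';

    A tidier variant is to put the workstations' computer accounts into one Active Directory group and grant the login to that group once, instead of per machine.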

    Read the article

  • Setting up chrooted SFTP on a Debian server

    - by Kevin Duke
    I'm trying to allow a user "user" to access my server by either SFTP or SSH, jailed into a directory with chroot. I read the instructions here, however it does not work. I did the following: ran useradd user, then modified /etc/ssh/sshd_config by adding

        Match User user
            ForceCommand internal-sftp
            ChrootDirectory /home/duke/aa/smart

    to the bottom of the file and changing the Subsystem line to

        Subsystem sftp internal-sftp

    Then I restarted sshd with /etc/init.d/ssh restart and logged in as user "user" with PuTTY. PuTTY says "Server unexpectedly closed the connection". Why is this and how can it be fixed?
    EDIT: Following the suggestions below, I've made the bottom of sshd_config look like:

        Match User user
            ChrootDirectory /tmp

    yet no change. The password is accepted, but I cannot connect via SSH or SFTP. What gives?
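
    The usual cause of that instant disconnect is ownership of the jail: sshd requires the ChrootDirectory and every directory above it to be owned by root and not group- or world-writable, and a path under /home/duke fails that check while it belongs to duke. A hedged sketch of the diagnosis and fix:

        # sshd logs the exact complaint:
        sudo tail -n 20 /var/log/auth.log
        # the chroot path and all its parents must be root-owned, not writable by others:
        sudo chown root:root /home/duke /home/duke/aa /home/duke/aa/smart
        sudo chmod 755 /home/duke /home/duke/aa /home/duke/aa/smart
        # give the user somewhere writable inside the jail ("upload" is a placeholder):
        sudo mkdir -p /home/duke/aa/smart/upload
        sudo chown user:user /home/duke/aa/smart/upload

    Note also that a full interactive SSH shell inside a chroot additionally needs a shell binary and its libraries copied into the jail, which is why SFTP-only jails (ForceCommand internal-sftp) are much simpler to get working.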

    Read the article

  • Customizing post-commit messages in svn for different users

    - by Suresh
    I have an svn repository that users can access (read/write) using their system account, or via tunneling over SSH with svnserve. I also have a post-commit hook that sends mail to specific users for different projects via svnnotify; the typical command is svnnotify <params> --to-regex-map <list of email IDs> <regex>. For users who have accounts on the system, the notification email is sent from username@machine.domain, which is fine. For users coming in via tunneling, the email gets sent from tunneluser@machine.domain, which is a fake address, since these users don't have an account; the only reason I specify a tunnel user ID is to keep track of who made which update. So my question (finally) is: is there a way to pass a parameter (the "true" email address) to svnserve, so that when the post-commit mail is sent, it can be sent "from" the correct email address? P.S. This is my first post here; if I haven't provided sufficient information, apologies, I'm happy to provide more details.
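
    svnserve itself won't pass extra parameters into the hook, but the hook can do the translation: look up the committer's real address and hand it to svnnotify's --from flag. A hedged sketch of a post-commit wrapper (the map file /etc/svn/authors.map is hypothetical, one "svnuser:real@example.com" entry per line, and the --to-regex-map argument is a stand-in for the existing per-project maps):

        #!/bin/sh
        REPOS="$1"
        REV="$2"
        AUTHOR=$(svnlook author -r "$REV" "$REPOS")
        # map the committer to a real address, falling back to the local form
        FROM=$(sed -n "s/^$AUTHOR://p" /etc/svn/authors.map)
        [ -z "$FROM" ] && FROM="$AUTHOR@machine.domain"
        svnnotify --repos-path "$REPOS" --revision "$REV" \
                  --from "$FROM" --to-regex-map dev@example.com=.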

    Read the article

  • Installing an nginx Reverse Proxy on CentOS 5

    - by heavymark
    I'm trying to install nginx as a reverse proxy on CentOS 5 with Apache. The instructions to do this are here: http://wiki.mediatemple.net/w/(dv):Configure_nginx_as_reverse_proxy_web_server Note: in the instructions, for the URL to get nginx I'm using the following: http://nginx.org/download/nginx-1.0.10.tar.gz Now here is my problem. After installing the required packages and running ./configure I get the following:

        checking for OS
         + Linux 2.6.18-028stab094.3 x86_64
        checking for C compiler ... found
         + using GNU C compiler
         + gcc version: 4.1.2 20080704 (Red Hat 4.1.2-51)
        checking for gcc -pipe switch ... found
        checking for gcc builtin atomic operations ... found
        checking for C99 variadic macros ... found
        checking for gcc variadic macros ... found
        checking for unistd.h ... found
        checking for inttypes.h ... found
        checking for limits.h ... found
        checking for sys/filio.h ... not found
        checking for sys/param.h ... found
        checking for sys/mount.h ... found
        checking for sys/statvfs.h ... found
        checking for crypt.h ... found
        checking for Linux specific features
        checking for epoll ... found
        checking for sendfile() ... found
        checking for sendfile64() ... found
        checking for sys/prctl.h ... found
        checking for prctl(PR_SET_DUMPABLE) ... found
        checking for sched_setaffinity() ... found
        checking for crypt_r() ... found
        checking for sys/vfs.h ... found
        checking for nobody group ... found
        checking for poll() ... found
        checking for /dev/poll ... not found
        checking for kqueue ... not found
        checking for crypt() ... not found
        checking for crypt() in libcrypt ... found
        checking for F_READAHEAD ... not found
        checking for posix_fadvise() ... found
        checking for O_DIRECT ... found
        checking for F_NOCACHE ... not found
        checking for directio() ... not found
        checking for statfs() ... found
        checking for statvfs() ... found
        checking for dlopen() ... not found
        checking for dlopen() in libdl ... found
        checking for sched_yield() ... found
        checking for SO_SETFIB ... not found
        checking for SO_ACCEPTFILTER ... not found
        checking for TCP_DEFER_ACCEPT ... found
        checking for accept4() ... not found
        checking for int size ... 4 bytes
        checking for long size ... 8 bytes
        checking for long long size ... 8 bytes
        checking for void * size ... 8 bytes
        checking for uint64_t ... found
        checking for sig_atomic_t ... found
        checking for sig_atomic_t size ... 4 bytes
        checking for socklen_t ... found
        checking for in_addr_t ... found
        checking for in_port_t ... found
        checking for rlim_t ... found
        checking for uintptr_t ... uintptr_t found
        checking for system endianess ... little endianess
        checking for size_t size ... 8 bytes
        checking for off_t size ... 8 bytes
        checking for time_t size ... 8 bytes
        checking for setproctitle() ... not found
        checking for pread() ... found
        checking for pwrite() ... found
        checking for sys_nerr ... found
        checking for localtime_r() ... found
        checking for posix_memalign() ... found
        checking for memalign() ... found
        checking for mmap(MAP_ANON|MAP_SHARED) ... found
        checking for mmap("/dev/zero", MAP_SHARED) ... found
        checking for System V shared memory ... found
        checking for POSIX semaphores ... not found
        checking for POSIX semaphores in libpthread ... found
        checking for struct msghdr.msg_control ... found
        checking for ioctl(FIONBIO) ... found
        checking for struct tm.tm_gmtoff ... found
        checking for struct dirent.d_namlen ... not found
        checking for struct dirent.d_type ... found
        checking for PCRE library ... found
        checking for system md library ... not found
        checking for system md5 library ... not found
        checking for OpenSSL md5 crypto library ... found
        checking for sha1 in system md library ... not found
        checking for OpenSSL sha1 crypto library ... found
        checking for zlib library ... found
        creating objs/Makefile

        Configuration summary
          + using system PCRE library
          + OpenSSL library is not used
          + md5: using system crypto library
          + sha1: using system crypto library
          + using system zlib library

          nginx path prefix: "/usr/local/nginx"
          nginx binary file: "/usr/local/nginx/sbin/nginx"
          nginx configuration prefix: "/usr/local/nginx/conf"
          nginx configuration file: "/usr/local/nginx/conf/nginx.conf"
          nginx pid file: "/usr/local/nginx/logs/nginx.pid"
          nginx error log file: "/usr/local/nginx/logs/error.log"
          nginx http access log file: "/usr/local/nginx/logs/access.log"
          nginx http client request body temporary files: "client_body_temp"
          nginx http proxy temporary files: "proxy_temp"
          nginx http fastcgi temporary files: "fastcgi_temp"
          nginx http uwsgi temporary files: "uwsgi_temp"
          nginx http scgi temporary files: "scgi_temp"

    It says if you get errors to stop and make sure packages are installed. I didn't get errors, but as you can see I got several "not found"s. Are those considered errors? If so, how do I resolve that? And as noted in the link, I cannot install through yum, because it won't work with Plesk then. Thanks!
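
    Those "not found" lines are nginx's configure probing optional, platform-specific features (kqueue and SO_ACCEPTFILTER are BSD-only, for instance), so they are expected on Linux and are not errors; configure stops with an explicit error only when a required dependency such as PCRE or zlib is missing, and the summary shows both were found. A hedged sanity check:

        # build dependencies (installing -devel packages via yum does not conflict
        # with Plesk; the restriction is about the nginx package itself)
        yum install -y pcre-devel zlib-devel openssl-devel
        ./configure && echo "configure finished OK"    # a non-zero exit means a real error
        make && make install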

    Read the article

  • Ubuntu automount: only mounting drives as root?

    - by glisignoli
    I'm sharing the mount directory over SMB so users on my network can use drives added to my Linux box. Users are able to read files but not write, modify, or delete files or directories. I'm using Ubuntu 10.04 Server Edition with halevt installed for USB automounting. AFAIK halevt is automounting the drives under /media, but the drives are showing up as:

        drwxrwxr-x 1 root root 20480 2010-12-29 20:40 disk
        drwxrwxr-x 1 root root 24576 2010-12-21 17:20 Sparta

    mount gives me:

        /dev/sda1 on /boot type ext2 (rw)
        /dev/sdb1 on /media/disk type fuseblk (rw,nosuid,nodev,sync,allow_other,blksize=4096,default_permissions)
        /dev/sdc1 on /media/Sparta type fuseblk (rw,nosuid,nodev,sync,allow_other,blksize=4096,default_permissions)

    When I umount the drives, the folders /media/disk and /media/Sparta are both removed. I tried changing the ownership with chown to nobody:nogroup, but it doesn't work (which I assume is because they are NTFS drives).
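
    The chown diagnosis is right: these are ntfs-3g (fuseblk) mounts, and NTFS has no Unix ownership to change, so ownership and permissions are fixed at mount time. A hedged sketch of remounting with explicit ownership (uid/gid 1000 and the device name are placeholders for the account or group that should own the files):

        sudo umount /media/disk
        sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=002 /dev/sdb1 /media/disk

    For this to survive automounting, halevt's mount policy (or a per-drive /etc/fstab entry) needs to carry the same uid/gid/umask options; alternatively, a Samba-side workaround is force user / force group on the share.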

    Read the article

  • What can I do to prevent my user folder from being tampered with by malicious software?

    - by Tom Wijsman
    Let's assume some things:
    - Back-ups do run every X minutes, yet the things I save should be permanent.
    - There's a firewall and virus scanner in place, yet there happens to be a zero-day attack on me.
    - I am using Windows. (Although feel free to append Linux / OS X parts to your answer.)
    Here is the problem: any software can change anything inside my user folder. Tampering with the files could cost me my life, whether it's accessing, modifying, or wiping them. So, what I want to ask is:
    - Is there a permission-based way to disallow programs from accessing my files in any way by default?
    - Extending on the previous question, can I ensure certain programs can only access certain folders?
    - Are there other less obtrusive ways than using Comodo? Or can I make Comodo less obtrusive?
    For example, the solution should be proof against (DO NOT RUN): del /F /S /Q %USERPROFILE%
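
    On the permissions question: anything you launch normally runs with your own rights, so NTFS ACLs can only help by making the data read-only even for you, with an explicit elevation required to change it. A hedged illustration with icacls (the vault folder is hypothetical, and malware that gains administrator rights can still undo this):

        rem strip inherited permissions, then grant yourself read-only and admins full control
        icacls "%USERPROFILE%\Documents\vault" /inheritance:r
        icacls "%USERPROFILE%\Documents\vault" /grant:r "%USERNAME%":(OI)(CI)R
        icacls "%USERPROFILE%\Documents\vault" /grant:r "Administrators":(OI)(CI)F

    A del /F /S /Q run in your normal, non-elevated context then fails with access denied on that folder. Restricting a specific program to specific folders, though, is sandboxing rather than file permissions (low-integrity levels, Sandboxie, or Comodo's own containment).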

    Read the article

  • Remote Desktop or Streaming Software/Services that Supports Gaming

    - by Griffin
    I've simply been amazed by the quality and speed of OnLive, as this technology has the potential to make hardware requirements irrelevant to the average user. However, at the moment OnLive is only for remotely controlling video games, not desktops or other devices in general. I'm in pursuit of software or services that can accomplish this as well as OnLive does. I need:
    - viewer (client) program portability (able to run from a USB stick)
    - DirectX / OpenGL full-screen game compatibility on the server side
    - gaming-acceptable color/scaling quality and responsiveness
    I have a very powerful desktop at home and I want to be able to access this raw power from any other computer I plug my USB stick into (in the same way OnLive gives gamers use of its powerful servers). What software/services have most of the above? NOTE: please specify which features your suggestion doesn't have.

    Read the article

  • Deleting an undeletable Directory in Windows 7

    - by Kaizen
    I have encountered this problem from time to time but have not been able to resolve it without formatting. I have a directory called D:\DotNet that I want to delete. I cannot, because inside this folder there is another folder called "T4 Code generation and Misc.". When I try to delete or access "T4 Code generation and Misc.", I get the following error:

        Could not find this item
        This is no longer located in D:\DotNet. Verify this item's location and try again.

    Hopefully this is a simple fix.
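
    A name ending in a dot (like "Misc.") or a space is legal on NTFS but trips up the normal Win32 path handling that Explorer and plain del/rd use; the \\?\ prefix bypasses that parsing. A sketch from cmd.exe (double-check the path first, as this deletes without confirmation):

        dir /x "D:\DotNet"      rem optional: also shows the folder's 8.3 short name
        rd /s /q "\\?\D:\DotNet\T4 Code generation and Misc."

    If the long name still won't match, deleting via the 8.3 short name shown by dir /x is the usual fallback.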

    Read the article

  • nginx points the sub-directory of an alias folder to the base directory

    - by Starry
    I am new to nginx, and I have a question about its configuration. My web site's content lives in different locations:

        location / {
            root /Path1;
        }
        location ^~ /personal {
            alias /Path2;
        }

    When I request http://mysite/personal, I get the content of /Path2, as intended. Now I want to add a sub-directory of /personal with its own configuration, so I added:

        location /personal/download {
            autoindex on;
        }

    But I get a 404 error when requesting http://mysite/personal/download; according to the error log, the request was mapped to /Path1/personal/download, which is not correct. How can I configure nginx so that all requests for http://mysite/personal/* are mapped to the corresponding directory under /Path2?
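
    This happens because alias applies only to the location block it is declared in: the new, more specific location /personal/download wins the match for those requests but has no alias of its own, so nginx falls back to the inherited root /Path1. A sketch of one fix is to give the sub-location its own alias:

        location ^~ /personal {
            alias /Path2;
        }
        location ^~ /personal/download {
            alias /Path2/download;
            autoindex on;
        }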

    Read the article

  • Finding the A Record for a Home Server [closed]

    - by Ryan Allred
    I have a hosted website that allows me to add subdomains and point them to different locations on the server, and I know that I can change the A record to point a name at a different IP address. Now, here's the question: I have a home server that I need to access through this hosted website's domain name. How would I go about setting up the A record to point to my home server? I'm usually pretty good with keyword searches, but on this one I'm pretty lost as to what to search for.
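
    In zone-file terms the record is a one-liner (a sketch; the name and address are placeholders, and most hosts expose this through their DNS control panel rather than a raw zone file):

        ; point home.example.com at the home connection's public IP
        home.example.com.    3600    IN    A    203.0.113.42

    The catch with home servers is that residential IPs usually change, which is what dynamic DNS is for: a small client updates the record whenever the IP changes ("dynamic DNS" is also the search term to start from), and the home router must forward the needed ports to the server.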

    Read the article
