Search Results

Search found 37031 results on 1482 pages for 'ms access'.


  • 403 Forbidden when trying to download file that was uploaded using SSH

    - by Simon Hartcher
    I have FTP access to an Apache server on Linux to upload files so that they can be downloaded from the web. I was recently granted SSH access for extra permissions, and figured it would be quicker to download files directly to the server instead of downloading them to my machine and then FTPing them to the server. When I downloaded a file over SSH and placed it in the public_html directory, it was not visible from the web. The permissions (checked from both SSH and the FTP client) were the same as all the other files that are visible, but the file did not appear in the directory listing, and if I typed the filename into my browser I got a 403 error. Evidently, when I FTP a file to the server, something else happens that makes it web-visible, and I am not currently privy to what that is. What am I missing that is causing the file to be invisible from the web?
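    A first check worth making (a sketch; the filenames are placeholders, and the SELinux line only applies if the distribution uses it): compare the broken file against one that works, since a 403 with correct-looking mode bits usually comes down to ownership or security context rather than the rwx bits themselves.

        ls -l ~/public_html/works.zip ~/public_html/broken.zip   # compare owner and group
        ls -Z ~/public_html/broken.zip                           # SELinux context, if applicable
        chown youruser:yourgroup ~/public_html/broken.zip        # match the working file's owner
        chmod 644 ~/public_html/broken.zip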

    Read the article

  • How do I force .htaccess authorization to occur over SSL?

    - by kenja
    I'm trying to force a particular directory to require both an allowed IP and a valid username/password through Basic authorization. To ensure that the username/password are sent in encrypted form, I want the directory to force SSL use as well. Here is what I have in my .htaccess file:

        # Force HTTPS connection
        RewriteEngine On
        RewriteCond %{SERVER_PORT} !^443$
        RewriteRule (.*) https://www.mywebsite.com%{REQUEST_URI} [R,L]

        ## password begin ##
        AuthName "Restricted Access"
        AuthUserFile /var/www/admin/.htpasswd
        AuthType Basic
        Require valid-user
        Order deny,allow
        Deny from all
        Allow from 79.1.231.151 62.123.134.83
        Satisfy All

    Unfortunately, when I access the directory over plain HTTP, it asks for the password before redirecting to the secure version, which means the password is sent unencrypted. What am I doing wrong? Is there a way to do this?
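    For what it's worth, in per-directory (.htaccess) context mod_rewrite runs in Apache's fixup phase, which comes after authentication, so the password prompt will always win this race. One commonly suggested workaround, sketched below under the assumption that mod_ssl is loaded and AllowOverride permits these directives, is to refuse plain-HTTP requests outright and bounce the resulting 403 to the HTTPS URL, so Basic Auth is only ever negotiated over SSL (the /protected/ path is a placeholder):

        SSLRequireSSL
        ErrorDocument 403 https://www.mywebsite.com/protected/

        AuthName "Restricted Access"
        AuthUserFile /var/www/admin/.htpasswd
        AuthType Basic
        Require valid-user

    One caveat: the same ErrorDocument also fires when an IP allow-list rejects a client, so combining it with Deny rules can loop a denied visitor through the redirect.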

    Read the article

  • External USB Fingerprint Reader for Pre-boot Authentication for Dell Laptop

    - by cop1152
    My company just purchased several Dell Latitude E6500 laptops with docking stations and external monitors. These laptops have a fingerprint scanner located next to the keyboard. Docked users who prefer to use the built-in fingerprint scanner for pre-boot authentication are forced to open the laptop in order to reach the scanner, which is an inconvenience when the laptop is docked. We are looking for an external USB fingerprint scanner that will work with the current pre-boot authentication setup. I assume that this scanner would have to access the existing credentials for authentication, wherever they are stored. So we need something that works pre-boot, uses the existing credentials, and does not interfere with usage when the machine is not docked, such as when the laptop is being used at home. Does anyone have experience with this scenario? Thanks.

    Read the article

  • How to optimize VirtualBox shared folders

    - by Nrew
    This is really pissing me off. No matter how much memory I put into the guest OS (Windows XP), it still hangs for about 365 days before you can access the file you want from the shared folder. What do I do to make things faster? Because after it hangs and doesn't respond for 365 days, it will do it again for another 250 days. I've even set the shared folder to permanent. This is a fairly decent machine: a 2.50 GHz processor (x64 architecture, but I have only 2 GB of memory, so my host OS is just 32-bit Windows 7), and the HDD has plenty of space left: 156 GB free of 250 GB.
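    Not a cure for the underlying slowness, but one workaround often suggested for sluggish shared-folder access is to map the share to a drive letter inside the XP guest instead of browsing \\vboxsvr each time; a sketch, where "share" stands for whatever name the folder was given in the VM settings:

        net use x: \\vboxsvr\share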

    Read the article

  • Would it be smarter to set up a Linux development server at home, or to use a hosted server?

    - by markle976
    I am in the process of learning as much as I can about LAMP. I was wondering if I should set up a web server on my home network, or use a service like Rackspace (cloud hosting)? I need root access, the ability to reach the server remotely via SSH/FTP/HTTP, and the ability to install things like Subversion, etc. I currently have Comcast, so I have plenty of bandwidth, but I am not sure whether this would violate the TOS and/or compromise the security of my home network. Pricing for these cloud hosts seems reasonable ($11 per month plus about $0.10 per GB of bandwidth), but I am not sure if I will have the control I am looking for.

    Read the article

  • Decrypting Windows XP encrypted files from an old disk

    - by Uri Cohen
    I had an old Windows XP machine with an encrypted directory. When moving to a new Win7 machine, I connected the old disk as a slave in the new machine, and hence cannot access the encrypted files. Chances don't seem good, as the documentation warns: "Do not delete or rename the user account from which you will want to recover the encrypted files. You will not be able to decrypt the files using the steps outlined above." On the other hand, I have full access to the machine, so maybe there's a utility that can extract the keys and use them to decrypt the files... BTW, I didn't have a password on the old machine, if that's relevant. Ideas, anyone? Thanks!

    Read the article

  • How can I force Parallels' networking to obtain an IP through a wireless router?

    - by RLH
    Here is my setup: I have a MacBook, a Thunderbolt Display, and an Ethernet connection plugged into the Thunderbolt Display. During the day, most of my network use can (and should) go over the Ethernet connection associated with the display. However, I also need to be able to connect to a wireless router. This hasn't been a problem on the Mac OS X side, but the program I need to run against the router has to obtain an IP address from the wireless access point. Given my current setup, how can I arrange things so that I can access the internet in OS X, yet have my Windows 7 instance running in Parallels get its assigned IP address from the wireless router that my Mac is also connected to? I've fiddled with Parallels' network settings for an hour, and I can't get Parallels to see the router, even though my Mac is certainly connected to it.

    Read the article

  • MySQL: "UPDATE command denied to user ''@'localhost'"

    - by Uncle Nerdicus
    For some reason, when I installed MySQL on my machine (a Mac running OS X 10.9), the 'root' MySQL account got messed up and I don't have access to it, but I do have access to the standard MySQL account 'sean'@'localhost', which I use to log into phpMyAdmin. I am trying to reset the 'root' password by starting the mysqld daemon with mysqld --skip-grant-tables and then running the following in the mysql shell:

        mysql> UPDATE mysql.user SET Password=PASSWORD('MyNewPass')
            -> WHERE User='root';
        mysql> FLUSH PRIVILEGES;

    The problem is that when I run that UPDATE, the daemon spits back:

        ERROR 1142 (42000): UPDATE command denied to user ''@'localhost' for table 'user'

    as if I hadn't used the -u argument when I started the mysql shell, even though I did. Any help is much appreciated, as I am lost at this point. :/
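    For reference, the documented root-password reset procedure goes through the account-management statements rather than a direct UPDATE: with --skip-grant-tables the grant tables are not loaded, so they have to be loaded first. A sketch, reusing the password from the question; note too that the empty user name in the error (''@'localhost') hints the shell may have connected to a normally running mysqld (one kept alive by launchd or MAMP, perhaps) rather than the --skip-grant-tables instance.

        FLUSH PRIVILEGES;
        SET PASSWORD FOR 'root'@'localhost' = PASSWORD('MyNewPass');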

    Read the article

  • File/folder permissions and groups on Linux with Apache

    - by phobia
    I'm trying to learn about permissions on a Linux web server with Apache. Some clues about the system: the server I have to play around with is Fedora-based, and Apache runs as apache:apache. To allow e.g. PHP to write to a file, the file needs to be chmod 777; 755 is not sufficient. What I'm wondering is basically how to set up permissions the way they should be on e.g. a shared web host. My main problem is that if I set permissions so that one user cannot access another's home folder, then Apache can't read from the public_html folder either. To keep the users out I need chmod 700, but to let Apache read I need at least execute on world, so 701 basically works, but it also lets some users in. So I'm really stuck on what to do. I have been considering adding the apache user to the four groups below to avoid having to add the world execute flag, but is that a bad thing? Should it be the other way around, with the users in the groups below also in the apache group? I was aiming at having four groups:

    1. webapp: same as dev_int, but the only group that can go inside the webapp/live folder, e.g. to do an update from the repo.
    2. dev_int: can read, write, and execute everything in the "web root", including the two below, but nothing outside the web root.
    3. dev_ext: can read, write, and execute in all client folders, but cannot access anything outside the webapp root.
    4. clients: basic FTP accounts. Each has a home folder with a public_html, but cannot access any other home folders.

    An example of the folder structure (no users in the aforementioned groups can go outside the web root):

        webroot
            some_project        :dev_int only
            webapp
                live            :webapp only
                staging         :dev_int and :dev_ext
            clients             :dev_int and :dev_ext
                client_1        :dev_int, :dev_ext and client1:clients
                    public_html
            dev
                developer_1     developer_1:dev_int OR :dev_ext
                    public_html
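    A minimal sketch of the usual shared-hosting arrangement, using the group and folder names from the question (the /home paths are assumptions): instead of a world execute bit, make apache the group owner of each home directory and give it traverse-only access.

        chgrp apache /home/client_1
        chmod 710 /home/client_1                # owner: full; apache group: traverse only; world: nothing
        chmod 755 /home/client_1/public_html    # the web content itself stays readable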

    Read the article

  • Mimic NTFS "Modify" permissions on an ACL-enabled ext3 filesystem in Linux?

    - by bobinabottle
    I am migrating our file share from Windows Server to Samba on Linux, and the only hurdle I have at the moment is the ACLs. Currently we have a number of directories that use the "Modify" permission on NTFS, so users can write to a directory, but once a file is written it cannot be modified. On Linux, my idea was to give the directory an ACL with read/write access, but a default ACL that grants only read access, so that new files come out read-only. Is this possible? I'm not quite sure how to set a default ACL that differs from the parent directory's own ACL. Thanks!
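    It is possible: a default ACL set on a directory is inherited by files created inside it and can be stricter than the directory's own access ACL. A sketch with setfacl, where "staff" is a placeholder group, with the caveat that a file's owner can still chmod it afterwards, so this only approximates NTFS "Modify":

        setfacl -m g:staff:rwx /srv/dropbox      # the group may create files here
        setfacl -d -m g:staff:r-- /srv/dropbox   # new files inherit read-only for the group
        getfacl /srv/dropbox                     # shows both the access and the default ACL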

    Read the article

  • Sonicwall NAT Policy Loopback

    - by John
    I have an issue and am pretty perplexed by it. I have a SonicWall set up with NAT policies and reflexive NAT for an internal web server. That is, only two policies and no loopback policy, and the internal clients can access the web server by its public IP, no problem. Now, on another connection, another SonicWall, I have the exact same setup for another web server, with the exact same policies (obviously different IPs), and there the internal clients can't access the internal website by its public IP without creating the loopback policy. Maybe on the first one I've overlooked it, but I don't see any loopback policy whatsoever, and it's working fine. My question is: does anyone know why the first one works like this, but the second one needs the loopback policy? Thanks

    Read the article

  • How to selectively route network traffic through VPN on Mac OSX Leopard?

    - by newtonapple
    I don't want to send all my network traffic down the VPN when I'm connected to my company's network (via VPN) from home. For example, when working from home, I would like to be able to back up all my files to the Time Capsule at home and still be able to access the company's internal network. I'm using Leopard's built-in VPN client, and I've tried unchecking "Send all traffic over VPN connection". But if I do that, I lose access to my company's internal websites, whether via curl or the web browser (though internal IPs are still reachable). It'd be ideal if I could selectively choose a set of IPs or domains to route through the VPN and keep the rest on my own network. Is this achievable with Leopard's built-in VPN client? If you have any software recommendations, I'd like to hear them as well.
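    With "Send all traffic" unchecked, a static route per internal subnet is the usual approach; a sketch, where the 10.0.0.0/16 subnet and the ppp0 interface are assumptions to be replaced with the company's actual values:

        sudo route -n add -net 10.0.0.0/16 -interface ppp0

    The symptom that internal IPs work while names do not points at DNS rather than routing; on Mac OS X, a file such as /etc/resolver/corp.example.com containing "nameserver 10.0.0.53" (both placeholders) scopes that domain to the VPN's resolver.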

    Read the article

  • Shrew VPN Client gives a default route; changing the policy stops me from accessing the VPN network

    - by Lock
    I am using the Shrew client to connect to what I believe is a NetScreen VPN. Now, when connected, the client adds the VPN as the default route. I do not want this; there is only one network behind the VPN that I need to access. I found that with the Shrew client you can change the "Policy" settings on the connection and add your own networks that should be tunnelled over the VPN. I do this and add my network, but when I connect the VPN I get nothing: I can't access the network. Any idea why this would be? I can see my network in the routing table, and it's correctly pointing to the correct gateway. A traceroute shows all time-outs, so I can't be 100% sure it is even trying to tunnel over the VPN. Any idea how I can troubleshoot this?

    Read the article

  • Personal DNS server and fallback outside home

    - by Jens
    I have my own DNS server at home to resolve local names, and that is working fine. Then there is my laptop; obviously the laptop leaves home now and then, and therefore connects to different networks outside my home where my DNS server is not reachable. So I figured I would just add Google as a secondary DNS. But when I do that, suddenly I can't access my local stuff (at home, that is), like the laptop is getting a quicker response from Google's DNS or something, because it can't find anything at the names I use locally. If I then remove the secondary DNS and keep only my own, it works fine again. So do I somehow need to separate which DNS servers to use on which networks? I already use separate DNS settings when I connect using my 3G modem, but hotspots seem to use the same settings regardless (at least on the train); also, can it differ for wired connections? Is there another solution?

    Read the article

  • Remote desktop to Fedora 20 with xrdp

    - by 5YrsLaterDBA
    I was able to set up xrdp on my Fedora 13 machine and access it from my Windows 7 machine by following the steps in the first post of this thread. It was simple and easy. But when I try the same on my Fedora 20 machine, things are quite different. There is no error message, but some new output like this:

        # chkconfig --levels 35 xrdp on
        Note: Forwarding request to 'systemctl enable xrdp.service'.
        # service xrdp start
        Redirecting to /bin/systemctl start xrdp.service

    and then I cannot reach it from my Windows machine. I also did the following, based on the last post of the above thread:

        # yum -y install tigervnc-server

    Is there any configuration I should do to make xrdp work for me? The machines can ping each other. EDIT: I can access the shared folder on my Windows machine from my Fedora 20, so the problem seems to be on the Fedora side. How do I check whether the service is running on Linux? "service --status-all" does not give me useful information.
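    Since Fedora 20 is systemd-based, "service --status-all" says little; a sketch of the usual checks, assuming the stock xrdp package's unit names, plus the firewalld rule that the default policy needs before RDP can get through:

        systemctl status xrdp.service xrdp-sesman.service   # is it running, and with what errors?
        journalctl -u xrdp.service                          # recent log output
        firewall-cmd --permanent --add-port=3389/tcp        # open the RDP port
        firewall-cmd --reload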

    Read the article

  • Dismissing systray balloons with the keyboard?

    - by rangerchris
    This is probably a supplementary question to "how to access the systray using the keyboard". I've read that, and done some googling (google fu lacking, or there really is no answer), but I can't find a nice quick keyboard shortcut to dismiss those info balloons that apps in the system tray choose to display every now and then. The hints for access in the linked question can't be used to close the balloon (at least when I've tried them here). Now I know I can wait for a timeout and they'll go away, but if I could just hit a key, that'd be fantastic. So... is anyone aware of a keyboard shortcut?

    Read the article

  • Integrating a Windows 2000 server into a Windows 2008 domain [on hold]

    - by user199121
    I have a network environment where my Windows 2000 server is just acting as a file server for sharing, so all the users have an account there with a username, a password, and a list of access rights. Now I want to keep this server, because I run an application from it that 20 users access, but I also want to add a new Windows 2008 R2 64-bit server as a domain controller. Is this possible?

    1. Is it OK to make the new Windows 2008 server a domain controller?
    2. I want all the user accounts to be the same on the domain controller, so users can still use the same username and password to log into the domain as well as into the Windows 2000 server, which is set up as a workgroup machine.
    3. Do I need to do anything to the Windows 2000 server for it to remain functional in the environment, so it can still be accessed by the client computers?

    Note: my client computers are Windows 2000 Pro, XP Pro, and Windows 7 32/64-bit. Thanks in advance

    Read the article

  • Nested RDP and ILO sessions, latency and keystroke repetition.

    - by ewwhite
    I'm working on a remote server installation entirely through iLO. Due to the software application and environment, my access is restricted to a Windows server that I must reach via RDP; from that system I get to the target server via HP iLO 3. I'm trying to run a CentOS installation in an environment where I can't use a kickstart. I'm doing this in text mode, but keystrokes repeat randomly, making it difficult to select the proper installation options. I'm using Microsoft's native RDP client (on Mac and Windows), and I've noticed this before when running installations or doing remote work in nested sessions. Is there a nice fix for this, or is it simply a function of the protocol?

    Read the article

  • Modify .htm files on wireless router

    - by mdeitrick
    What am I trying to do? I am attempting to access the .htm files on my wireless router to modify the look and feel of the Netgear GENIE webpage(s). What have I done? I've read several articles on eHow and Instructables that detail how to setup your router as an FTP server, since I figured the best way to access the files would be through FileZilla. Setting up an FTP server through my router doesn't sound like what I should do to accomplish my task... or perhaps it is? I've also read the documents provided by Netgear for getting started and setting up functionality on the router. Maybe I overlooked something? My specifications Netgear router WNR2000v3 FileZilla v3.8.1 UPDATE: Since someone voted to close this question I'll clarify what I'm asking for... At the present I am dissatisfied with the current UI/webpage look when logging into routerlogin.net. Furthermore I would like to make changes to the admin dashboard.

    Read the article

  • SMB super slow within LAN between MAC and PC

    - by asdcasdc
    I have a Windows desktop that stores all my movies, songs, and pictures, and a Mac laptop from which I would like to access these files. I don't want to use FTP or SCP because I don't want the files downloaded to my Mac; I want to access them as if they were on a network-mounted disk. So I tried the native SMB protocol (available in Finder > Go > Connect to Server). I tried dragging a file and dropping it onto my Mac's desktop, and surprisingly, I am only able to transfer at a very slow rate of about 1mb/s. Assuming network connectivity is not the problem, has anyone experienced incredible slowness with SMB? Are there alternative protocols I could use in this case between the PC and the Mac?
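    One commonly suggested tweak for slow SMB transfers from a Mac (a sketch, not a guaranteed fix, and it resets on reboot) is to disable TCP delayed ACKs and retest:

        sudo sysctl -w net.inet.tcp.delayed_ack=0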

    Read the article

  • How to run a service as a user who can't delete or update or create a file

    - by neeraj
    MongoDB has a web-based console for trying out MongoDB. I have created something similar for trying out Node.js: I accept user input and then perform eval on that command. Given the power of Node.js, someone using the web console could create files, delete files on the system, or execute 'rm -rf '. I was thinking of running node as a user called node. This node user would not have any privilege to write, create, or update anything; the only access it would have is read access. Would that work, or is that still too much risk? What is a good strategy for handling such a situation?
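    A minimal sketch of that strategy, assuming the app lives under /srv/app; note that filesystem permissions only stop file writes, since the eval'd code can still open sockets, spawn processes, and write anywhere that happens to be world-writable (such as /tmp), so a chroot or container is worth considering on top:

        sudo useradd -r -s /usr/sbin/nologin node   # system account, no login shell
        sudo chown -R root:root /srv/app            # the node user owns nothing here
        sudo chmod -R u=rwX,go=rX /srv/app          # everyone else: read and traverse only
        sudo -u node node /srv/app/server.js        # run the service as the restricted user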

    Read the article

  • Losing Windows XP files. How to retrieve them?

    - by ravi
    I have a portable 500 GB HDD. Over the last few days, all the files in certain folders have become corrupted somehow; I can't access or delete them. Here is an example: if I try to access these files/folders, I get the following error. This is spreading across my HDD: so far it has affected two folders worth 70 GB, one of which is the backup folder where all my important data resides. So I really stand to lose this data. How can I retrieve it? Please help.
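    A hedged first step from a Windows command prompt, where X: is a placeholder for the portable drive's letter; copy off anything that still opens before running it, since a repair pass can make matters worse on a failing disk:

        chkdsk X: /f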

    Read the article

  • nginx + php-fpm redirection cycle error on a new Linode VPS

    - by chifliiiii
    I'm new to nginx, and I'm trying to get my first server running. I followed this guide, as I'm trying to use it for a multisite WordPress install. After installing everything, I get a 500 Internal Server Error. If I check the logs, I see this:

        2012/09/27 08:55:54 [error] 11565#0: *8 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: xxx.xxx.xxx.xxx, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "www.mydomain.com"
        2012/09/27 08:59:32 [error] 11618#0: *1 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: xxx.xxx.xxx.xxx, server: localhost, request: "GET /phpmyadmin HTTP/1.1", host: "www.mydomain.com"

    My conf files are the following. /etc/nginx/sites-available/mydomain.com:

        server {
            listen 80 default_server;
            server_name mydomain.com *.mydomain.com;
            root /srv/www/aciup.com/public;

            access_log /srv/www/mydomain.com/log/access.log;
            error_log /srv/www/mydomain.com/log/error.log;

            location / {
                index index.php;
                try_files $uri $uri/ /index.php?$args;
            }

            # Add trailing slash to */wp-admin requests.
            rewrite /wp-admin$ $scheme://$host$uri/ permanent;

            # Directives to send expires headers and turn off 404 error logging.
            location ~* \.(js|css|png|jpg|jpeg|gif|ico)$ {
                expires 24h;
                log_not_found off;
            }

            # This prevents hidden files (beginning with a period) from being served.
            location ~ /\. {
                access_log off;
                log_not_found off;
                deny all;
            }

            # Pass uploaded files to wp-includes/ms-files.php.
            rewrite /files/$ /index.php last;
            if ($uri !~ wp-content/plugins) {
                rewrite /files/(.+)$ /wp-includes/ms-files.php?file=$1 last;
            }

            # Rewrite multisite '.../wp-.*' and '.../*.php'.
            if (!-e $request_filename) {
                rewrite ^/[_0-9a-zA-Z-]+(/wp-.*) $1 last;
                rewrite ^/[_0-9a-zA-Z-]+.*(/wp-admin/.*\.php)$ $1 last;
                rewrite ^/[_0-9a-zA-Z-]+(/.*\.php)$ $1 last;
            }

            location ~ \.php$ {
                client_max_body_size 25M;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
            }
        }

    /etc/nginx/nginx.conf:

        user www-data;
        worker_processes 4;
        worker_cpu_affinity 0001 0010 0100 1000;
        error_log /var/log/nginx/error.log;
        pid /var/run/nginx.pid;

        events {
            worker_connections 2048;
        }

        http {
            include /etc/nginx/mime.types;
            access_log /var/log/nginx/access.log;
            sendfile on;
            tcp_nopush on;
            keepalive_timeout 5;
            tcp_nodelay on;
            server_tokens off;
            gzip on;
            gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript;
            gzip_disable "MSIE [1-6]\.(?!.*SV1)";
            include /etc/nginx/conf.d/*.conf;
            include /etc/nginx/sites-enabled/*;
        }

    Any help will be appreciated.
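    One thing worth checking before touching the rewrites (a sketch): the error log reports server "localhost", which suggests the request is being handled by a leftover default vhost rather than the block above. Note also that root points at /srv/www/aciup.com/public while the logs point at mydomain.com; if that path does not actually contain the WordPress files, the try_files fallback can never resolve and cycles.

        sudo nginx -t                      # config syntax check; reports which files load on error
        ls -l /etc/nginx/sites-enabled/    # is the distribution's "default" site still linked here?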

    Read the article

  • Ethernet switch not working

    - by Froskoy
    I've just tried using two different Ethernet switches on my network to replace an 8-port Netgear gigabit Ethernet switch, which works fine but doesn't have enough ports for what I need. Computers are connected to a TP-Link TD-8840T router via a switch and use DHCP for IP address assignment. One switch is a TigerSwitch 6924M, which I'd expect to be difficult to set up, since it is second-hand and has an advanced configuration menu that I can't access without a serial port. However, the second switch I tried is a new TP-Link TL-SF024, which doesn't appear to have any configuration options, so that can't be the problem. When I say "not working," I mean that although the computers show they are connected to a network, they cannot access the internet; for example, commands like "ping -c10 google.co.uk" come back with 100% packet loss. What could be causing the problem, and how do I fix it?

    Read the article
