Search Results

Search found 34016 results on 1361 pages for 'filesystem access'.

  • Website hosted on IIS is not accessible

    - by Tola Odejayi
    I have two sites set up in IIS on a remote machine RM; one on the regular port 80, the other on port 5773. From my local machine LM, I can access the site on 80, but not the one on 5773: I get a status code of 502 and an error code of 10060 ("A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond"). I can access the 5773 site via IIS when I am logged into RM (i.e. by right-clicking a page on the site and selecting 'Browse'), and I can also access pages on the 5773 site via a browser, again when logged into RM. I just can't do the same via a browser when logged into LM. I have ensured that port 5773 is open for outgoing traffic on LM. Could the problem be that I also need to ensure that port 5773 is open for inbound traffic on RM?
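
    If Windows Firewall on RM is what's blocking inbound traffic (an assumption - a perimeter firewall or the hosting provider could equally be responsible), a rule allowing inbound TCP 5773 would be needed; a minimal sketch for Windows Server 2008 or later:

        :: from LM, test raw reachability first
        telnet RM 5773

        :: on RM, allow inbound TCP 5773 through Windows Firewall
        netsh advfirewall firewall add rule name="IIS 5773" dir=in action=allow protocol=TCP localport=5773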

  • Connection to VPN using command line

    - by plurby
    I've written a simple batch file to connect to a specific VPN connection using RASDIAL:

        rasdial MyVPNConnection

    but it always returns error 691, "Access denied because username and/or password is invalid on the domain" (see the Remote Access Service (RAS) Error Code List). I then tried pointing it at my Remote Access Phonebook (rasphone.pbk) to see what would happen:

        rasdial MyVPNConnection /phonebook:%userprofile%\AppData\Roaming\Microsoft\Network\Connections\Pbk\rasphone.pbk

    and still got error 691. I've then unchecked the following, but the same problem was reported when executing my batch file.
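
    Since 691 is specifically an authentication failure, one thing worth trying is passing the credentials to rasdial explicitly, which isolates whether the stored phonebook credentials are the problem; a hedged sketch (DOMAIN\username and the password are placeholders):

        rasdial MyVPNConnection DOMAIN\username MyPassword
        :: or let rasdial prompt for the password
        rasdial MyVPNConnection DOMAIN\username *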

  • setting upload/space limit for FTP client

    - by tombull89
    As part of my web hosting plan I've also got a few domains hosted for family and friends. I have my own FTP account with full access, and I would like to give FTP access to a web designer and a user. Is there a way of restricting uploads larger than a certain size, or of setting a size limit on the folder that the FTP account has access to? I'd rather not let them upload ridiculously big files. Thanks.
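
    If the box is Linux and you have root access with disk quotas enabled on the relevant filesystem (big assumptions on a shared hosting plan - this may be something only your host can set up), a per-user quota caps everything an FTP account can write; a sketch with hypothetical numbers:

        # hard-limit user 'designer' to ~500 MB (block counts are in KiB) on /home
        setquota -u designer 0 512000 0 0 /home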

  • How to use a second volume device on Amazon EC2

    - by Khoyendra Pande
    I have two volumes on Amazon EC2; the default 1 GiB volume is full. Now I want to use my second volume, which is 9 GiB. I ran cat /proc/partitions and got:

        major minor  #blocks  name
        202       1  1048576  xvda1
        202      80  9437184  xvdf

    Then I ran mkfs.ext3 -F /dev/sdf, which shows:

        mkfs.ext3: No such file or directory while trying to determine filesystem size

    Then I ran df and got:

        Filesystem  1K-blocks     Used  Available  Use%  Mounted on
        /dev/xvda1    1032088  1031280          0  100%  /
        tmpfs          313160        8     313152    1%  /lib/init/rw
        udev           297800       24     297776    1%  /dev
        tmpfs          313160        4     313156    1%  /dev/shm
        overflow         1024       32        992    4%  /tmp

    So I am still unable to use my 9 GiB volume. I have confirmed I have two volumes; the attachment information is i-7e4fb41c:/dev/sda1 (attached) and i-7e4fb41c:/dev/sdf (attached), and only sda1 is in use. Does anyone know how I can use my second volume (sdf)? Thx
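
    For what it's worth (an assumption based on the /proc/partitions listing above): on Xen-based EC2 instances the volume attached as /dev/sdf appears to the kernel as /dev/xvdf, which would explain why mkfs.ext3 can't find /dev/sdf. A minimal sketch against the xvdf node (/data is a hypothetical mount point):

        mkfs.ext3 /dev/xvdf
        mkdir -p /data
        mount /dev/xvdf /data
        echo '/dev/xvdf /data ext3 defaults 0 2' >> /etc/fstab   # survive reboots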

  • How to use UMLFS?

    - by Vi
    I'm trying to mount what is inside a UML session as a FUSE filesystem on the host. There's a "uml_mount" program which looks like a tool for this purpose, but it fails. What is UMLFS (I haven't found any documentation at all), and how do I mount it? uml_mount mounts a FUSE filesystem and starts

        uml_mconsole <umid> umlfs <file descriptor>

    which tries to send this file descriptor to the UML kernel (to deal with further FUSE things), but the sending fails. Also, I haven't found any signs of FUSE inside the kernel. Do I need some special patch for this?

  • OpenVZ multiple networks on CTs

    - by user6733
    I have a Hardware Node (HN) with 2 physical interfaces (eth0, eth1). I'm playing with OpenVZ and want to let my containers (CTs) have access to both of those interfaces. I'm using the basic configuration - venet. CTs can reach eth0 (the public interface) fine, but I can't get CTs access to eth1 (the private network). I tried:

        # on HN
        vzctl set 101 --ipadd 192.168.1.101 --save
        vzctl enter 101
        ping 192.168.1.2    # no response here

    ifconfig on the CT returns lo (127.0.0.1), venet0 (127.0.0.1), venet0:0 (95.168.xxx.xxx), venet0:1 (192.168.1.101). I believe the main problem is that all packets flow through eth0 on the HN (figured out using tcpdump), so the problem might be the routes on the HN. Or is my logic here all wrong? I just need access to both interfaces (networks) on the HN from the CTs. Nothing complicated.
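
    With venet, every CT packet is routed by the HN, so two host-side knobs usually matter here (a sketch, assuming 192.168.1.0/24 lives on eth1): forwarding must be on, and the HN has to answer ARP for the CT's private address so LAN hosts know where to send replies.

        # on the HN
        sysctl -w net.ipv4.ip_forward=1
        sysctl -w net.ipv4.conf.eth1.proxy_arp=1   # HN answers ARP for 192.168.1.101

    vzctl can also manage the proxy-ARP entries itself if NEIGHBOUR_DEVS=all is set in /etc/vz/vz.conf.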

  • What's the best platform to publish documentation to internal users?

    - by serialhobbyist
    My team has a need to publish documentation internally. At the moment, it's spread all over the place and this means we often have to search everywhere to find something. We'd like to publish everything in one place. The main thing that stops us is access control - the wikis in place don't belong to us and we can't do it. What is the best tool for publishing docs, ideally fitting these requirements:

    • web front end - readers access docs using a browser
    • single place to put docs
    • access control by individual doc or by sets of docs (folders, branch of 'site', ...)
    • if you don't have access to a doc, you don't see the link to that page/doc/folder
    • either a built-in editor or something my users are familiar with (e.g. Word)
    • built-in version control would be nice

    Also, can you think of other criteria I should've specified?

  • Is Software Raid1 Using mdadm with a Local Hard Disk and GNBD Possible?

    - by Travis
    I have multiple webservers which use many small files to create dynamic web pages. Caching the web pages isn't an option. The webserver also performs writes, so I need a synchronous filesystem. I'm looking to maximise performance, as it's my understanding that small files are the weakness (to varying degrees) of a cluster filesystem over ethernet. Currently I'm using CentOS 5.5, 64-bit. Since it's only about 300MB of data, I'm looking at mdadm RAID-1 with the GNBD and a local hard disk, using the "--write-mostly" option so reads are done from the local hard disk. Is this possible? If so, is there any advantage to making it a tmpfs disk instead of a local hard disk? Or will the files on the local hard disk just get cached in RAM anyway, so I won't see a performance gain by using tmpfs, assuming there's enough RAM available?
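
    For reference, the array described would be assembled roughly like this; a minimal sketch (the device names /dev/sdb1 and /dev/gnbd0 are hypothetical):

        # local disk listed first; devices after --write-mostly get the flag,
        # so reads prefer the local disk and the GNBD device mostly takes writes
        mdadm --create /dev/md0 --level=1 --raid-devices=2 \
            /dev/sdb1 --write-mostly /dev/gnbd0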

  • Unable to run cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use in an unencumbered player. The curl aspect is working: I can run the script and a browser side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    But the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is it ignoring my --sout input?
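
    The dst="'#standard..." in the error shows the single quotes arriving as literal characters: when $CMD is expanded, the shell does not re-parse quoting. A sketch of the usual fix in bash, building the command as an array instead of a string ($STREAM_URL is a placeholder for the curl-derived URL):

        CMD=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' "$STREAM_URL")
        exec "${CMD[@]}"

    With an array, each element survives expansion as a single word, so the #standard{...} chain reaches cvlc intact.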

  • Retrieving a specific value from "df -h" using shell

    - by Diego Dias
    When I use df -h, I get the following output:

        Filesystem             Size  Used Avail Use% Mounted on
        /dev/mapper/VolGroup00-LogVol00
                                59G  2.2G   54G   4% /
        /dev/sda1              122M   38M   78M  33% /boot
        tmpfs                  1.1G     0  1.1G   0% /dev/shm
        10.10.0.105:/somepath   11T  8.4T  2.1T  81% /storage4
        10.11.0.101:/somepath   15T  8.9T  5.9T  61% /storage1
        /dev/mapper/patha      5.0T  255G  4.8T   5% /storage5_vol0
        /dev/mapper/pathb      5.0T  195G  4.9T   4% /storage5_vol1
        /dev/mapper/pathc      5.0T  608G  4.5T  12% /storage5_vol2

    I want to write a script that gets the value of the Avail column for a specific storage. I used to use

        df -k /storage_name | tail -1 | awk '{print $3}'

    but the Filesystem column may or may not share a line with the values (long device names wrap onto their own line), which changes the field I need from $3 to $4. How can I get Avail in a single command even when the previous columns aren't on the same line?
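
    One approach: df's POSIX output mode never wraps long device names, so the columns stay stable; a minimal sketch:

        # -P keeps each filesystem on one line, so Avail is always field 4
        df -Pk /storage4 | awk 'NR==2 {print $4}'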

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 system (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the IO speed is about 200MB/s, stable for the first 18GB. But then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100MB/s, but just for a few seconds, and then the process seems to hang for a bit. At the same time, all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?
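
    A pattern this often matches (an assumption, not a diagnosis): the first stretch lands in the page cache at NFS speed, and the drop comes when writeback to the RAID 5 volume kicks in and the copy falls to the array's sustained write rate. Watching both sides while it happens would confirm or rule that out:

        iostat -xm 5                               # await/%util on the LVM/RAID devices during the slow phase
        grep -E 'Dirty|Writeback' /proc/meminfo    # dirty-page backlog while rsync runs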

  • Our GoDaddy web server is drowning in temp files!!

    - by temp file guy
    We have a virtual dedicated server with a fairly large amount of traffic. We use GoDaddy, with cPanel. We have 10 GB of space, of which about 80% is not our content but logs and server utilities. GoDaddy support is evasive and is trying to encourage us to migrate to a new service with 15 GB. Reviewing the large files, we found the following. We have a ton of old TMP files in this directory:

        /public_html/files/TMP/FILE_PERSISTANCE_PROVIDER: (no access)

    and some large files in these directories:

        /usr/local/apache/logs/
            suphp_log (220M)
            access_log (7M)
            error_log (5M)
        /usr/local/apache/domlogs/ (no access)
        /usr/local/cpanel/ (no access)
        /usr/local/cpanel-rollback
        /tmp

    Questions:

    • What can we safely delete or truncate?
    • How can we change permissions on the files marked "no access" so we can delete them?
    • Is there a utility to monitor and clean up temp files?
    • What other files/programs can we delete?

    thanks!
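
    For the files you can reach, the usual pattern is to truncate live logs rather than delete them (Apache keeps the open file handle, so deleting frees no space until a restart) and to age out temp files; a hedged sketch with a hypothetical retention window:

        : > /usr/local/apache/logs/suphp_log    # truncate in place, keeps the handle valid
        find /tmp -type f -mtime +7 -delete     # remove temp files untouched for a week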

  • How to tell if linux disk IO is causing excessive (> 1 second) application stalls

    - by noahz
    I have a Java application performing a large volume (hundreds of MB) of continuous output (streaming plain text) to about a dozen files on an ext3 SAN filesystem. Occasionally, this application pauses for several seconds at a time. I suspect that something related to the ext3/vxfs (Veritas File System) functionality (and/or how it interacts with the OS) is the culprit. What steps can I take to confirm or refute this theory? I am aware of iostat and /proc/diskstats as starting points. (Revised title to de-emphasize journaling and emphasize "stalls".) I have done some googling and found at least one article that seems to describe behavior like I am observing: "Solving the ext3 latency problem". Additional information:

    • Red Hat Enterprise Linux Server release 5.3 (Tikanga), kernel 2.6.18-194.32.1.el5
    • Primary application disk is a fibre-channel SAN:

          lspci | grep -i fibre
          14:00.0 Fibre Channel: Emulex Corporation Saturn-X: LightPulse Fibre Channel Host Adapter (rev 03)

    • Mount info: type vxfs (rw,tmplog,largefiles,mincache=tmpcache,ioerror=mwdisable) 0 0
    • cat /sys/block/VxVM123456/queue/scheduler

          noop anticipatory [deadline] cfq
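
    To pin a pause on the filesystem, timing the application's own syscalls during a stall is more direct than system-wide stats; a sketch (assuming you can attach to the JVM; <JVM_PID> is a placeholder):

        # -T prints time spent in each syscall; multi-second write()/fsync() times
        # during a pause implicate the filesystem rather than the application
        strace -f -T -e trace=write,fsync -p <JVM_PID>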

  • maintaining redirects in nginx from an external source

    - by Sascha
    I'm in the position of having to give our marketing department the ability to maintain their redirects on their own. Until now, they passed the information to the IT department and we maintained it for them in nginx.conf. Some of these folks are quite familiar with redirects in IIS or even in Apache, but giving them direct access to the nginx configuration is not an option. I see that nginx has no support for .htaccess files, which I could otherwise have given them access to, and I would also prefer not to grant write access to a conf file that nginx includes - I expect that our marketing team would break our nginx setup within hours... Is there a secure possibility that doesn't involve giving them access to the heart of our load balancer?
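
    One pattern that fits this (a sketch, not a drop-in config - the file names and paths are hypothetical): keep the redirect pairs in a plain data file the marketing team can edit, pull it into nginx via map, and apply it with a validated reload so a bad edit can't take the balancer down.

        # /etc/nginx/redirects.map - editable by marketing, one "URI target;" pair per line
        /old-campaign    /new-campaign;
        /spring-sale     https://www.example.com/sale;

        # nginx.conf, http context
        map $uri $redirect_target {
            default "";
            include /etc/nginx/redirects.map;
        }

        # in the server block
        if ($redirect_target) {
            return 301 $redirect_target;
        }

    A cron job or a narrowly scoped sudo command that runs nginx -t && nginx -s reload then applies their changes only when the configuration still validates.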

  • How to get the maximum file size of a VZFS partition?

    - by Nulldevice
    I have VPS hosting with a VZFS file system. How can I determine the maximum file size on a VZFS partition? UPD: Free space (or total space) is not what I need. Sometimes a file cannot occupy the whole partition volume - FAT16 with its 2GB limit is a good example. I need to use a large database file (say, 64GB), so I need to know whether the filesystem of the VPS hosting will cope with it. It is easy to calculate for an ext3 filesystem using tune2fs, but the VPS uses VZFS by Virtuozzo, which is badly documented. Is there any generic way to calculate the maximum file size for a given filesystem in Linux?
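
    With no documentation to consult, an empirical probe is one generic fallback (assuming the filesystem supports sparse files, this consumes almost no real space): seek past the size you need and try to write there - the write fails with "File too large" if the filesystem can't address that offset.

        dd if=/dev/zero of=probe.bin bs=1M count=1 seek=65536   # 1 MiB written at the 64 GiB mark
        rm probe.bin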

  • Nginx deny doesn't work for folder files

    - by user195191
    I'm trying to restrict access to my site to allow only specific IPs, and I've run into the following problem: when I access www.example.com, deny works perfectly, but when I try to access www.example.com/index.php, it returns the "Access denied" page AND the PHP file is downloaded directly in the browser without processing. I want to deny access to all files on the website for all IPs but mine. How should I do that? Here's the config I have:

        server {
            listen 80;
            server_name example.com;
            root /var/www/example;

            location / {
                index index.html index.php;    ## Allow a static html file to be shown first
                try_files $uri $uri/ @handler; ## If missing pass the URI to front handler
                expires 30d;                   ## Assume all files are cachable
                allow my.public.ip;
                deny all;
            }

            location @handler { ## Common front handler
                rewrite / /index.php;
            }

            location ~ .php/ { ## Forward paths like /js/index.php/x.js to relevant handler
                rewrite ^(.*.php)/ $1 last;
            }

            location ~ .php$ { ## Execute PHP scripts
                if (!-e $request_filename) { rewrite / /index.php last; } ## Catch 404s that try_files miss
                expires off; ## Do not cache dynamic content
                fastcgi_pass 127.0.0.1:9001;
                fastcgi_param HTTPS $fastcgi_https;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params; ## See /etc/nginx/fastcgi_params
            }
        }
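
    One thing to bear in mind (the sketch below assumes this is the cause): nginx serves each request from exactly one location, and allow/deny placed inside location / do not carry over to the ~ .php$ block, so PHP requests are never filtered there. Declaring the rules once at server level lets every location inherit them:

        server {
            allow my.public.ip;   # as in the config above
            deny  all;
            # ... existing locations unchanged ...
        }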

  • Git: push via ssh to a root owned repository with ssh root logins disabled

    - by anthonysomerset
    Is that even possible? Summary: I'm running a puppet master on a server, and ideally we want root logins via ssh disabled - we want to force all access via sudo where root access is required. However, we have puppet installed using a git repo to manage the manifests, and this repo is currently owned by root. At the moment I only know of two (less than ideal) solutions:

    • allow root ssh access via key auth only - if so, what can I lock the key down to, so that it only allows the git push commands?
    • have the repo in /etc/puppet owned by a different user - will puppet work reliably with this?
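
    On the second option, a common arrangement (a sketch, under the assumption that the puppet master only needs to read the manifests, so unprivileged ownership is harmless) is a dedicated service user that owns the repo, with pushes done over ssh as that user instead of root:

        useradd -r -m -s /bin/bash git     # hypothetical service account
        chown -R git:git /etc/puppet
        # developers then push as:  git push git@server:/etc/puppet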

  • mysql single database relocation

    - by asdmin
    I would like to know if it's possible to keep different databases in different filesystem locations. Background: we are a hosting service which hosts MySQL, web, and SMTP for its customers, but all our services (sql, smtp, http) are located in different places. We are going to assign a single logical volume to each customer, which will accommodate the customer's mail, web pages and (hopefully) SQL database. Web pages and mail are already covered, but I am not able to find a configuration setting that would let me specify the location of a database (the directory where MySQL stores the DB). Let me please highlight: the target here is to relocate different databases to different locations in the filesystem, not to move them from a single place to another (single) place. Also, please do not bother answering with soft and hard symbolic links. ;) Thanks
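
    For what it's worth, MySQL has a per-table (not per-database) option that can point data at another directory - hedged heavily: for MyISAM tables the server implements DATA DIRECTORY with symlinks under the hood, which may fall foul of the no-symlinks rule, and InnoDB only gained per-table data directories in later releases. The paths below are hypothetical:

        mysql -e "CREATE TABLE customer1.orders (id INT PRIMARY KEY)
                  ENGINE=MyISAM
                  DATA DIRECTORY='/volumes/customer1/mysql'
                  INDEX DIRECTORY='/volumes/customer1/mysql';"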

  • Port(s) not forwarding?

    - by user11189
    I have cable internet service through Charter Communications and feed two desktop computers through a Linksys RP614v3 router. One system is my wife's, running WinXP Home Edition; the other is mine, running Vista Home Premium (SP1). I have port forwarding configured in the Linksys so I can access the Vista system remotely using TightVNC. Initially it worked great, and I was able to remotely tend email and access local files while out of town for work. Lately the cable internet service appears to flicker intermittently, and afterwards my Mailwasher program loses the ability to access the net and I've been unable to make the remote connection. When I reset the port forwarded for email in the router control panel, Mailwasher functionality returns, but as I'm home when that happens, I have no easy way to check remote access until the next time I'm on the road or at work. I'm at my wit's end - the TightVNC client connects fine from my wife's system from behind the modem/router setup, but I don't know how to maintain whatever gets reset when I fiddle with the control panel, and the need to do so at all is new. I accessed it fine for a week off and on while out of town a month ago, and now I can't leave home and access it from work an hour later.

  • Multi- authentication scenario for a public internet service using Kerberos

    - by StrangeLoop
    I have a public web server which has users coming from the internet (via HTTPS) and from a corporate intranet. I wish to use Kerberos authentication for the intranet users, so that they would be automatically logged in to the web application without the need to provide any login/password (assuming they are already logged in to the Windows domain). For the users coming from the internet I want to provide traditional basic/form-based authentication; user/password data for these users would be stored internally in a database used by the application. The web application will be configured to use Kerberos authentication for users coming from specific intranet IP networks, and basic/form-based authentication for the rest. From a security perspective, are there risks involved in this kind of setup, or is this a generally accepted solution? My understanding is that the server doesn't need access to the KDC (see Kerberos authentication, service host and access to KDC) and can be completely isolated from AD and the corporate intranet. The server has a keytab file stored locally that is used to decrypt tickets sent by the users coming from the intranet. The tickets only contain the username and domain of the incoming user; the server never sees the passwords of authenticated users. If the server were hacked and the keytab file compromised, the attacker could forge tickets for any domain user and get access to the web application as any user. But typically this is the case anyway if a hacker gains access to a keytab file on the local filesystem. The encryption key contained in the keytab file is based on the service account password in AD and is in hashed form; I guess it is very difficult to brute-force this password if strong Kerberos encryption like AES-256-SHA1 is used. As the server has no network access to the intranet, even the compromised service account couldn't be directly used for anything.
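
    On that last point, the keytab's encryption types can be checked directly on the server (a minimal sketch, assuming MIT Kerberos client tools; the path is the conventional default):

        klist -kte /etc/krb5.keytab   # lists principals, timestamps and enctypes per entry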

  • Which Revision Control Software to use for Personal Dropbox?

    - by wag2639
    I want to set up a sync repository that would be similar to Dropbox. Goals/requirements:

    • Free (open source very preferable)
    • Linux host (probably Ubuntu)
    • Windows/Mac/Linux clients
    • Potential for multiple users with limited access (optional)
    • Preferably easy; doesn't necessarily need to be automatic
    • Revision control very preferable

    Basically, I want to be able to use multiple computers, possibly with different OSes, and be able to access, use, and sync files across all of them. I also want a local copy of the repository for when I'm not connected to the network (if I'm working on a laptop, I want a local repository to keep revisions in and merge later with the "master" repository). For example, I'm editing a few pictures on my laptop during the day outside of my network, but when I get home I would like to sync the changes, including incremental changes, with my desktop. I would also like my roommates to be able to access and use this repository, but limit their access to certain files. For example, I may want to use this to back up financial records, but wouldn't want them to have access to those files. I'm a programmer and familiar with SVN, but I know that wouldn't be the most appropriate choice since it doesn't handle binaries well and doesn't keep a local repository. I know better choices exist, but I don't really know them well enough to choose the best one.

  • i[Pod|Phone|Pad|*] backups in iTunes

    - by Maroloccio
    iTunes <- iPhone. At sync time, a backup is performed. Which data is included, and which is not? I.e., are songs backed up (potentially redundantly), so that a computer ends up holding both the source file on the filesystem and the copy within the device backup? Is anything on the iPhone filesystem not backed up? (E.g., on a Mac using Time Machine, some files are excluded from the backup even though not all of them can be recreated upon restore - I lost my postfix config this way...)

  • linux disk usage report inconsistency after removing file; cpanel inaccurate disk usage report

    - by brando
    Relevant software: Red Hat Enterprise Linux Server release 6.3 (Santiago), with cPanel 11.34.0 (build 7) installed.

    Background and problem: I was getting a disk usage warning (via cPanel) because /var seemed to be filling up on my server. The assumption would be that a log file was growing too large and filling up the partition. I recently removed a large log file and changed my syslog config to rotate the log files more regularly - I removed something like /var/log/somefile and edited /etc/rsyslog.conf. This is why I was suspicious of the disk usage warning issued by cPanel: it didn't seem right. This is what df was reporting for the partitions:

        $ [/var]# df -h
        Filesystem   Size  Used Avail Use% Mounted on
        /dev/sda2    9.9G  511M  8.9G   6% /
        tmpfs        5.9G     0  5.9G   0% /dev/shm
        /dev/sda1     99M   53M   42M  56% /boot
        /dev/sda8    883G  384G  455G  46% /home
        /dev/sdb1    9.9G  151M  9.3G   2% /tmp
        /dev/sda3    9.9G  7.8G  1.6G  84% /usr
        /dev/sda5    9.9G  9.3G  108M  99% /var

    And this is what du was reporting for the /var mount point:

        $ [/var]# du -sh
        528M .

    Clearly something funky was going on. I had a similar kind of reporting inconsistency in the past, and df reported correctly after I restarted the server, so I decided to reboot and see if the same thing would happen. This is what df reports now:

        $ [~]# df -h
        Filesystem   Size  Used Avail Use% Mounted on
        /dev/sda2    9.9G  511M  8.9G   6% /
        tmpfs        5.9G     0  5.9G   0% /dev/shm
        /dev/sda1     99M   53M   42M  56% /boot
        /dev/sda8    883G  384G  455G  46% /home
        /dev/sdb1    9.9G  151M  9.3G   2% /tmp
        /dev/sda3    9.9G  7.8G  1.6G  84% /usr
        /dev/sda5    9.9G  697M  8.7G   8% /var

    This looks more like what I'd expect. For consistency, this is what du reports for /var:

        $ [/var]# du -sh
        638M .

    Question: This is a nuisance. I'm not sure where the disk usage reports issued by cPanel get their info, but it clearly isn't correct. How can I avoid this inaccurate reporting in the future? df reporting the wrong disk usage seems like a strong indicator of the underlying problem, but I'm not sure. Is there a way to 'refresh' the filesystem somehow so that the df report is accurate without restarting the server? Any other ideas for resolving this issue?
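
    A likely mechanism (an assumption, but one consistent with the reboot fixing it): deleting a log file that a daemon still holds open leaves its blocks allocated - df counts them, du can't see them - until the process closes the descriptor. Finding and restarting the holder avoids the reboot:

        lsof +L1 | grep /var          # open files with link count 0 (deleted but still held)
        /etc/init.d/rsyslog restart   # or whichever daemon holds the file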

  • After upgrading Ubuntu to 9.10, my hard drive now has a warning

    - by Sean
    It is a 500GB hard drive, formatted as ext3, at path /dev/sdc1. The Disk Utility does not even see it. This warning is from GParted:

        e2label: No such device or address while trying to open /dev/sdc1
        Couldn't find valid filesystem superblock.
        Couldn't find valid filesystem superblock.
        dumpe2fs 1.41.9 (22-Aug-2009)
        dumpe2fs: No such device or address while trying to open /dev/sdc1
        Unable to read contents of this file system?
        Because of this some operations may be unavailable.

    Did I lose something during the upgrade of the system? Was it the hard drive or the Ubuntu system that went bad?
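
    Since the error is "No such device or address" rather than a corrupt superblock as such, the first check is whether the kernel even sees the disk after the upgrade; if it does, trying a backup superblock is the standard next step (a sketch - the 32768 offset is hypothetical until confirmed by the mke2fs dry run):

        dmesg | grep sdc            # did the kernel detect the disk at all?
        fdisk -l /dev/sdc           # is the partition table still there?
        mke2fs -n /dev/sdc1         # -n: dry run, only prints backup superblock locations
        e2fsck -b 32768 /dev/sdc1   # fsck against a backup superblock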
