Search Results

Search found 26810 results on 1073 pages for 'linux scripting'.


  • LXC container can only access host via bridge

    - by vitaut
    I have an LXC container with i686 Ubuntu 12.04 running on an x86_64 Ubuntu 12.04 host. I've set up a bridge using instructions here. However, pings from the container only reach the host and not the other machines on the local network. Similarly, only the host, and not the other machines, can see the container OS. The host's /etc/network/interfaces file looks as follows:

        auto lo
        iface lo inet loopback

        iface eth0 inet manual

        auto br0
        iface br0 inet dhcp
            bridge_ports eth0
            bridge_fd 0
            bridge_maxwait 0

    The container's /etc/network/interfaces file looks as follows:

        auto lo
        iface lo inet loopback

        auto eth0
        iface eth0 inet dhcp

    And here's the relevant part of the container's config:

        lxc.network.type=veth
        lxc.network.link=br0
        lxc.network.flags=up

    Any ideas what I'm doing wrong? Additional info: the output of iptables-save on the host:

        $ sudo iptables-save
        # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013
        *filter
        :INPUT ACCEPT [6854:721708]
        :FORWARD ACCEPT [4067:538895]
        :OUTPUT ACCEPT [4967:522405]
        COMMIT
        # Completed on Sat Oct 26 06:06:48 2013
        # Generated by iptables-save v1.4.12 on Sat Oct 26 06:06:48 2013
        *nat
        :PREROUTING ACCEPT [82235:21547307]
        :INPUT ACCEPT [16:1070]
        :OUTPUT ACCEPT [9386:583359]
        :POSTROUTING ACCEPT [14693:1291952]
        -A POSTROUTING -s 10.0.3.0/24 ! -d 10.0.3.0/24 -j MASQUERADE
        COMMIT
        # Completed on Sat Oct 26 06:06:48 2013

    The output of brctl show on the host:

        $ brctl show
        bridge name     bridge id               STP enabled     interfaces
        br0             8000.080027409684       no              eth0
                                                                vethBkwWyV

    The output of ifconfig br0 on the host:

        $ ifconfig br0
        br0       Link encap:Ethernet  HWaddr 08:00:27:40:96:84
                  inet addr:192.168.1.11  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:232863 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:59518 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:0
                  RX bytes:34437354 (34.4 MB)  TX bytes:198492871 (198.4 MB)

    The output of ifconfig eth0 on the host:

        $ ifconfig eth0
        eth0      Link encap:Ethernet  HWaddr 08:00:27:40:96:84
                  inet6 addr: fe80::a00:27ff:fe40:9684/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:299419 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:203569 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:59077446 (59.0 MB)  TX bytes:372056540 (372.0 MB)

    The output of ifconfig eth0 on the container:

        $ ifconfig eth0
        eth0      Link encap:Ethernet  HWaddr 00:16:3e:74:08:2b
                  inet addr:192.168.1.12  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::216:3eff:fe74:82b/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:81 errors:0 dropped:0 overruns:0 frame:0
                  TX packets:113 errors:0 dropped:0 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:8506 (8.5 KB)  TX bytes:9021 (9.0 KB)
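
    Worth checking before anything else: the 08:00:27 MAC prefix belongs to VirtualBox, so this "host" looks like it is itself a VirtualBox guest (an inference from the MAC, not something the question states). Bridged setups inside such guests fail in exactly this way when the virtual NIC drops frames for MAC addresses other than its own. A diagnostic sketch, with the VM name in the last command as a placeholder:

        # on the host: do the container's ARP requests ever get answered?
        sudo tcpdump -n -e -i eth0 arp
        # if not, allow foreign MACs on the VirtualBox adapter
        # (run on the physical machine):
        VBoxManage modifyvm "host-vm-name" --nicpromisc1 allow-all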

    Read the article

  • bash script with permanent ssh connection

    - by samuelf
    Hi, I use a bash script which runs:

        /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 [email protected]

    However, when I run the bash script, it waits. I see the connection come up, but the script doesn't exit; it's as if it's waiting for the SSH process to finish, because when I kill the SSH process manually, the bash script finishes as well. Any ideas how to resolve this? UPDATE: I have cronned this script, and the cron process is the one that becomes a zombie; the actual script runs just fine, sorry about that. With ps -auxf I get:

        root      597  0.0  0.7  2372  912 ?  Ss  Jul12  0:00 cron
        root     2595  0.0  0.8  2552 1064 ?  S   02:09  0:00  \_ CRON
        1001     2597  0.0  0.0     0    0 ?  Zs  02:09  0:00      \_ [sh] <defunct>
        1001     2603  0.0  0.0     0    0 ?  Z   02:09  0:00          \_ [cron] <defunct>

    When I kill the ssh, the defunct entries disappear. Why would they become defunct?
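
    A common cause, and an assumption here, though it fits the symptoms: ssh -f forks into the background while still holding the stdout/stderr pipes it inherited, so cron's wrapper processes hang around as <defunct> until ssh lets go of them. Detaching the streams inside the script usually clears it up (user@host is a placeholder for the redacted address):

        #!/bin/bash
        # give the backgrounded ssh nothing of cron's to hold open
        /usr/bin/ssh -f -N -T -L8888:127.0.0.1:3306 user@host \
            < /dev/null > /dev/null 2>&1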

    Read the article

  • File system concepts (df command)

    - by mkab
    I'm finding it difficult to understand some things about the df command. Suppose I type df and I have the following output:

        Filesystem    1k-blocks    Used         Avail    Capacity    Mounted on
        /dev/da0s1    some number  some number  number   percentage  /win
        /dev/da0s2    some number  some number  number   percentage  /win/home
        /dev/da0s3a   some number  some number  number   percentage  /
        devfs         some number  some number  number   percentage  /dev
        /dev/da0s3g   some number  some number  number   percentage  /local
        /dev/da0s3h   some number  some number  -number  102%        /reste
        /dev/da0s3d   some number  some number  number   percentage  /tmp
        /dev/da1s3f   some number  some number  number   percentage  /usr
        /dev/da1s3e   some number  some number  number   percentage  /var
        /dev/da1s1a   some number  some number  number   percentage  /public

    Are the answers to the following questions correct?

    1. How many physical drives do I have? Ans: 2, da0s1 and da1s1.
    2. How many physical partitions on each disk? Ans: 8 for da0s1 and 1 for da1s1.
    3. How many BSD partitions on each physical partition? Ans: impossible to determine; we have to use -T to determine the type.
    4. How is it possible for the file system /dev/da0s3h to be filled at 102%? And where is this overflowed data written? Ans: I have no idea for this one.

    Thanks.
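
    On the 102% question, the usual explanation is reserved space: UFS holds back a slice of each filesystem (8% by default, the minfree setting) that only root may write into, and df computes Capacity against the non-reserved total, so root spilling into the reserve pushes the figure past 100%. The data is written normally, just into blocks ordinary users cannot claim. One way to confirm on FreeBSD (assuming these daXsY devices carry UFS, which the naming suggests):

        # print the filesystem's tunable parameters, including minfree
        tunefs -p /dev/da0s3h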

    Read the article

  • E: Internal Error, Could not perform immediate configuration (2) on libattr1? in Ubuntu

    - by user32178
    Hi, I am working with the latest version of Ubuntu. While installing via apt-get install, I tried to abort by pressing Ctrl+Z. It terminated "successfully" ;). But the next time I tried to use apt-get, I got an error about a "lock" being "temporarily unavailable", something like that, and, unfortunately, I deleted the /var/lib/dpkg folder. After that I can't install anything with apt-get; I get the error:

        E: Internal Error, Could not perform immediate configuration (2) on libattr1

    So how can I solve this issue?
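
    Deleting /var/lib/dpkg throws away dpkg's database of what is installed, which is why every apt-get run now fails. A recovery sketch, assuming the automatic backups that Debian/Ubuntu keep under /var/backups are still present:

        # recreate the directory skeleton dpkg expects
        sudo mkdir -p /var/lib/dpkg/{alternatives,info,parts,triggers,updates}
        sudo touch /var/lib/dpkg/status /var/lib/dpkg/available
        # restore the most recent status backup, then refresh apt
        sudo cp /var/backups/dpkg.status.0 /var/lib/dpkg/status
        sudo apt-get update
        sudo apt-get install --reinstall libattr1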

    Read the article

  • Why does the Java VM process eat up more RAM than specified in the -Xmx parameter?

    - by evilpenguin
    I have multiple servers running CentOS 5.4 with only one application running on the Java VM. I've configured the Java VM with the following arguments:

        java -Xmx4500M -server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode \
             -XX:NewSize=1024m -Djava.net.preferIPv4Stack=true \
             -Dcom.sun.management.jmxremote=true

    The machines I'm running the VM on have 6 GB of RAM and no other applications running. After a while, the java process starts to hit the swap space really hard; I get this info out of the top command:

        7658 root  25  0 11.7g 3.9g 4796 S 39.4 67.3 543:54.17 java

    On the other hand, if I connect via JConsole, it reports that the Java VM has 2.6 GB used, 4.6 GB committed and 4.6 GB max. java -version returns:

        java version "1.6.0_17"
        Java(TM) SE Runtime Environment (build 1.6.0_17-b04)
        Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01, mixed mode)

    Why is the Java VM expanding so much past its allocated heap size? And where does that memory go, if it's not reported in JConsole?
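
    -Xmx only caps the Java heap. The process additionally pays for the permanent generation, a stack per thread, the JIT code cache, NIO direct buffers and the VM's own bookkeeping, and none of that shows up in JConsole's heap view, which is where the missing gigabytes usually hide. A sketch of the same launch line with the usual off-heap consumers bounded (the values are illustrative, and MainClass stands in for the real entry point):

        java -Xmx4500M -Xss256k -XX:MaxPermSize=256m -XX:MaxDirectMemorySize=256m \
             -server -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode -XX:NewSize=1024m \
             -Djava.net.preferIPv4Stack=true -Dcom.sun.management.jmxremote=true MainClass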

    Read the article

  • Hosting websites in our Workplace custom-built datacentre

    - by i.h4d35
    I'm faced with a unique learning opportunity at work at the moment. Due to the slowdown (amongst other reasons), the powers that be at my office have decided to abandon our shared hosting providers (both shared and dedicated hosting) and host the websites at our office's datacentre. We're running 7 websites, with an average of about 900 unique hits per day at the moment. We have 2 servers set aside for this: one is a Dell PowerEdge 1850 (2x Intel Xeon 3 GHz, 4 GB RAM, 73 GB HDD) and the other is an HP DL380 G3 (Intel Xeon 2.8 GHz, 6 GB RAM, 73 GB HDD).

    a) I would like to know the pros and cons of going ahead with this project. All the sites will be hosted on a single IP. In all probability, the OS is going to be CentOS.

    b) Do you think I should bring virtualization into this equation (KVM/Xen)? I was thinking in terms of separate instances of the DB server and the frontend, though I do not know if this is the best way to go.

    c) Should I be trying to use cloud stacks like OpenStack and try to make it look like websites hosted on some sort of public cloud? (something that I checked out here). Here is something else I came across, which looks similar to what needs to be done at our office.

    About the websites: of the 7 websites, 4 are basic static websites which basically give a lot of information about a few local institutions. The remaining 3 are local product-based websites developed in PHP, wherein the end user can view products and order them online. I am trying to take this as a learning experience wherein I can learn to build something from scratch and save the company a little something in the process. The migration needs to be completed by Easter, so I guess that gives us some time (or am I being overly optimistic?). I am confused here and would appreciate all the help I can get. Thanks in advance.

    Read the article

  • Can I recover a nano process from a previous terminal?

    - by davidparks21
    My system crashed while I was in a nano session with unsaved changes. When I log back in via SSH, I see the nano process still running when I do a ps:

        davidparks21@devdb1:/opt/frugg_batch$ ps -ef | grep nano
        1001  31714 29481  0 18:32 pts/0  00:00:00 nano frugg_batch_processing
        1001  31905 31759  0 19:16 pts/1  00:00:00 grep --color=auto nano
        davidparks21@devdb1:/opt/frugg_batch$

    Is there a way I can bring the nano process back under my control in the new terminal? Or any way to force it to save remotely (from my new terminal)?
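
    One avenue worth trying is reptyr, a third-party tool (packaged on Ubuntu, though not mentioned in the question) that reattaches a running process to the current terminal; whether it works depends on the kernel's ptrace restrictions:

        sudo apt-get install reptyr
        # grab the nano process (PID from the ps output above)
        reptyr 31714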

    Read the article

  • Debian tuning for increasing read/write buffering

    - by Claudiu
    Is there a way to modify Debian settings so that memory is used more for disk read/write caching? I am already using RAID 0, but that's not enough for multiple users, and the disk is almost constantly struggling. Torrents use the disk very heavily, and rTorrent doesn't have cache settings.
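
    Linux already gives spare RAM to the page cache automatically; what can be tuned is how much dirty data may accumulate before it is flushed to disk. A sketch with illustrative values (add them to /etc/sysctl.conf to survive reboots):

        # let dirty pages grow to 40% of RAM; start background writeback at 10%
        sudo sysctl -w vm.dirty_ratio=40
        sudo sysctl -w vm.dirty_background_ratio=10
        # retain directory and inode caches longer
        sudo sysctl -w vm.vfs_cache_pressure=50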

    Read the article

  • Recursively move files in sub-dirs to new sub-dirs of same name

    - by Gabriel
    I have a batch of files all ending with the same string, i.e. *_ext.dat, located in several sub-dirs along with several other files, in a given main dir. This is the structure:

        /main_dir/subdir1/file11_ext.dat
        /main_dir/subdir1/file12_ext.dat
        /main_dir/subdir1/file13_ext.dat
        /main_dir/subdir1/file14_other.dat
        /main_dir/subdir1/file15_other.dat
        /main_dir/subdir2/file21_ext.dat
        /main_dir/subdir2/file22_ext.dat
        /main_dir/subdir2/file23_ext.dat
        /main_dir/subdir2/file24_other.dat
        /main_dir/subdir2/file25_other.dat
        /main_dir/subdir3/file31_ext.dat
        /main_dir/subdir3/file32_ext.dat
        /main_dir/subdir3/file33_ext.dat
        /main_dir/subdir3/file34_other.dat
        /main_dir/subdir3/file35_other.dat

    I need to recursively move only the files ending in *_ext.dat into a new main dir, new_dir, respecting the sub-dir structure, so the files end up in an equivalent dir structure like this:

        /new_dir/subdir1/file11_ext.dat
        /new_dir/subdir1/file12_ext.dat
        /new_dir/subdir1/file13_ext.dat
        /new_dir/subdir2/file21_ext.dat
        /new_dir/subdir2/file22_ext.dat
        /new_dir/subdir2/file23_ext.dat
        /new_dir/subdir3/file31_ext.dat
        /new_dir/subdir3/file32_ext.dat
        /new_dir/subdir3/file33_ext.dat

    Because of this, the command should also create those sub-dirs with their corresponding names. I know that with a line like this one:

        find . -name "*_ext.dat" -print0 | xargs -0 rm -rf

    I can delete all those files, but I don't know how to modify it to do what I need (or if it is even possible).
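
    A minimal sketch of one way to do it, pairing find with a loop so that each destination sub-dir is created on demand (it assumes you start inside /main_dir and that /new_dir may not exist yet):

        cd /main_dir
        find . -type f -name '*_ext.dat' -print0 |
        while IFS= read -r -d '' f; do
            mkdir -p "/new_dir/$(dirname "$f")"   # recreate the sub-dir
            mv "$f" "/new_dir/$f"                 # move, keeping the relative path
        done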

    Read the article

  • How do I hook into Tar with BASH?

    - by orb
    Long Story Short: I am working with tar archives that contain PNG images in base64 encoding. I would like to use bash (or whatever else works) to hook into the extraction function of tar, to decode the PNG images from base64 encoding to standard PNG encoding after the files are unpacked. A simple

        cat $input-file | base64 -d > $output-file

    will successfully decode the images. Is there a way I can hook into tar -xf so that users do not have to do any (or minimal) extra work to decode the images? In the GNU tar documentation (http://www.gnu.org/software/tar/manual/html_chapter/Backups.html#SEC97) I found that there are in fact variables reserved to hold the names of functions I want hooked into various moments of tar's execution. However, the documentation explains that these variables, along with other variables that configure tar, are located in a file named backup-specs. Unfortunately, the path to this file is not given. Further, running sudo find / -name backup-specs tells me that this file is not present on my Ubuntu 13.04 system.

    Background information not included in the Long Story Short: I have been working on a browser-based (WebGL) particle effect creation application (http://www.particleeffect.org), (https://github.com/cgrabowski/webgl-particle-effect-editor), (https://github.com/cgrabowski/webgl-particle-effect). I have begun to write a client-side-only solution for saving and loading effect data as a tar archive. However, since client-side JavaScript has limited capability to process binary data, the images used as textures in the effect are saved with base64 encoding. I have been able to implement saving effect data as a tar archive (haven't pushed that to GitHub yet). However, the images present in said tar archive cannot be manipulated unless they are decoded from base64 encoding.
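
    For what it's worth, the backup-specs file belongs to the backup scripts bundled with GNU tar, not to tar itself, which is why it doesn't exist on the system; plain tar -xf has no extraction hook to attach to. A wrapper script is the low-friction alternative; a sketch, with untar-decode as a made-up name:

        #!/bin/bash
        # untar-decode: extract an archive, then decode every PNG in place
        # note: this touches every PNG under the current directory, not only
        # those just extracted
        set -e
        tar -xf "$1"
        find . -type f -name '*.png' -print0 |
        while IFS= read -r -d '' f; do
            base64 -d "$f" > "$f.tmp" && mv "$f.tmp" "$f"
        done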

    Read the article

  • How can I fix puppet refusing to start and asking for "master.pp"?

    - by cwd
    I'm using the very latest version of Puppet and have been following the Apress "Pro Puppet" guide step by step. I have installed Puppet:

        sudo aptitude install ruby libshadow-ruby1.8
        sudo aptitude install puppet puppetmaster facter

    I have edited /etc/puppet/puppet.conf to include certname:

        [master]
        certname=puppet.mydomain.com

    I have edited /etc/hosts and added the following line:

        127.0.0.1 puppet.mydomain.com puppet

    I have set the hostname of the server:

        echo "puppet.mydomain.com" > /etc/hostname
        hostname -F /etc/hostname

    And then I try to run Puppet from the command line:

        puppet master --verbose --no-daemonize

    And Puppet gives me this error:

        Could not parse for environment production: Could not find file /master.pp

    I'm running all commands with sudo, and the last line of the error message always says that it can't find master.pp, with the path before it pointing to my current working directory. What am I doing wrong? I should also mention that I don't have a DNS record set up for puppet.mydomain.com (I saw some online documentation mentioning this might be a problem), however I was fairly sure that the hosts file would let me get around that.
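
    That error pattern usually means the puppet binary is treating "master" as the name of a manifest to apply rather than as a subcommand, which is what releases predating the consolidated puppet command do (an assumption worth verifying, since a distribution's "latest" package can still be pre-2.6). A quick check, falling back to the old daemon name:

        puppet --version
        # on pre-2.6 packages the master is a separate daemon:
        sudo puppetmasterd --verbose --no-daemonize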

    Read the article

  • cannot resolve DNS server's own domain name

    - by sims
    I have a DNS server (mega.dude - 123.123.123.123) running BIND 9.4. When I run:

        dig mega.dude

    I get no answer section. I have nameserver 123.123.123.123 in /etc/resolv.conf. Here is my zone file:

        $TTL 1W
        @       IN SOA  mega.dude. names.mega.dude. (
                        2009081502      ; serial
                        3H              ; refresh
                        15M             ; retry
                        1W              ; expiry
                        1D )            ; minimum
                NS      ns1
                NS      ns2
                MX 10   mail.mega.dude.
                A       123.123.123.123
        @       A       123.123.123.123
        ns1     A       123.123.123.123
        ns2     A       123.123.123.123
        www     CNAME   @
        mail    A       123.123.123.123

    It didn't use to look like this. I read that it's evil to have an MX record pointing to a CNAME, so I changed that. Then I thought maybe that was also the case for NS, so I changed those too. Still no good. The ports are open. I can't figure it out. Oh, by the way, all the other zones resolve fine, but not the server's own domain. So I know I'm doing something stupid. Thanks for your help all!
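
    A zone that resolves everywhere except for itself often simply fails to load, so checking it is a cheap first step (the file path below is a guess; use wherever this zone actually lives, and note that the zone above carries two identical A records for the apex, which is worth cleaning up too):

        named-checkzone mega.dude /etc/bind/db.mega.dude
        # then ask the server directly, bypassing any resolver in between:
        dig @127.0.0.1 mega.dude SOA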

    Read the article

  • Unable to Access Certain Websites

    - by codejoust
    Through a local network, all computers except one Ubuntu machine can access (1) adobe.com, (2) icann.org, (3) apache.org, (4) example.com. The Ubuntu machine returns (in Firefox): "Though the site seems valid, the browser was unable to establish a connection." Furthermore, when I traceroute those websites from the Ubuntu machine, they all return ubuntu.local, and it ends there:

        traceroute to icann.org (192.0.32.7), 30 hops max, 40 byte packets
        1 ubuntu.local (192.168.1.105) 3000.791 ms !H 3000.808 ms !H 3000.814 ms !H

    I've checked the hosts file, and there isn't anything in there, and I have an Apache server there, so if it were redirected to localhost, I'd probably see the localhost webroot page. Thanks in advance!

        user@ubuntu:~$ netstat -nr
        Kernel IP routing table
        Destination   Gateway      Genmask        Flags  MSS Window  irtt  Iface
        169.254.0.0   0.0.0.0      255.255.0.0    U      0   0       0     eth1
        192.0.0.0     0.0.0.0      255.0.0.0      U      0   0       0     eth1
        0.0.0.0       192.168.1.1  0.0.0.0        UG     0   0       0     eth1

    The Ubuntu machine is one of six on the network. I'm using OpenDNS for DNS, so I don't think that should be a problem.
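
    The routing table looks like the culprit: there is a 192.0.0.0/8 route bound directly to eth1, and every site listed as unreachable appears to live inside 192/8 (icann.org is 192.0.32.7, for instance), so those packets are sent onto the local link instead of to the gateway at 192.168.1.1. If that route wasn't added deliberately, removing it should restore access (it will return on reboot if something in the network config re-adds it):

        sudo route del -net 192.0.0.0 netmask 255.0.0.0 dev eth1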

    Read the article

  • How To Perform Distributed Website Monitoring?

    - by cballou
    I would like to know how sites like the following perform distributed website monitoring (from multiple checkpoints/countries): pingdom.com, site24x7.com, uptrends.com, siteuptime.com, etc. To be exact, what process would occur in checking whether a given domain name went down? If the server finds that the site is down, what is the next step? Would it make a REST API request to a separate server to run the same test and report the results? I have a few theories, including: utilizing hosts in different countries; utilizing proxies in different countries. I'm looking for the most proper or correct way to handle this, which can include the usage of servers from multiple countries/hosts.
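
    These services generally run their own probe nodes in each region, closest to the first theory; open proxies are too flaky to base alerting on. The usual shape is that every probe checks independently and a central collector only raises an alert when several regions agree the site is down. A sketch of such a probe (the collector URL and its /api/report endpoint are invented for illustration):

        #!/bin/bash
        # probe.sh: run on each regional node from cron
        TARGET="http://example.com"
        COLLECTOR="https://collector.invalid/api/report"
        CODE=$(curl -o /dev/null -s -w '%{http_code}' --max-time 10 "$TARGET")
        curl -s -X POST "$COLLECTOR" \
             -d "node=$(hostname)" -d "url=$TARGET" -d "status=$CODE"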

    Read the article

  • Why won't dhclient use the static IP I'm telling it to request?

    - by mike
    Here's my /etc/dhcp3/dhclient.conf:

        request subnet-mask, broadcast-address, time-offset, routers,
                domain-name, domain-name-servers, domain-search, host-name,
                netbios-name-servers, netbios-scope, interface-mtu;
        timeout 60;
        reject 192.168.1.27;

        alias {
            interface "eth0";
            fixed-address 192.168.1.222;
        }

        lease {
            interface "eth0";
            fixed-address 192.168.1.222;
            option subnet-mask 255.255.255.0;
            option broadcast-address 255.255.255.255;
            option routers 192.168.1.254;
            option domain-name-servers 192.168.1.254;
        }

    When I run "dhclient eth0", I get this:

        There is already a pid file /var/run/dhclient.pid with pid 6511
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.1
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        wmaster0: unknown hardware address type 801
        wmaster0: unknown hardware address type 801
        Listening on LPF/eth0/00:1c:25:97:82:20
        Sending on   LPF/eth0/00:1c:25:97:82:20
        Sending on   Socket/fallback
        DHCPREQUEST of 192.168.1.27 on eth0 to 255.255.255.255 port 67
        DHCPACK of 192.168.1.27 from 192.168.1.254
        bound to 192.168.1.27 -- renewal in 1468 seconds.

    I used strace to make sure that dhclient really is reading that conf file. Why isn't it paying attention to my "reject 192.168.1.27" and "fixed-address 192.168.1.222" lines?
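
    Neither directive does what it appears to: reject filters on the DHCP server's identifier (here 192.168.1.254), not on the offered address, and the alias/lease blocks only add a second address or act as a fallback when no server answers at all. To ask the server for a specific address, the request itself has to carry it; a sketch, and the server remains free to refuse:

        # in /etc/dhcp3/dhclient.conf
        interface "eth0" {
            send dhcp-requested-address 192.168.1.222;
        }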

    Read the article

  • avoid check of list of known hosts

    - by shantanuo
    I get the following prompt every time I try to connect to a server using SSH. I type "yes", but is there a way to avoid this?

        The authenticity of host '111.222.333.444 (111.222.333.444)' can't be established.
        RSA key fingerprint is f3:cf:58:ae:71:0b:c8:04:6f:34:a3:b2:e4:1e:0c:8b.
        Are you sure you want to continue connecting (yes/no)?
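
    For hosts you genuinely trust (this waives a man-in-the-middle safeguard, so it is a real tradeoff), the check can be relaxed per invocation or per host:

        # one-off:
        ssh -o StrictHostKeyChecking=no user@111.222.333.444
        # or permanently, in ~/.ssh/config:
        Host 111.222.333.444
            StrictHostKeyChecking no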

    Read the article

  • Force an LXC container to use its own IP address

    - by emma sculateur
    Sorry if this question has already been asked; I could not find it. I have this setup:

        UBUNTU-LXC eth0 (10.0.0.3/24)
            |
        lxcbr0 (10.0.0.1/24)               [inside UBUNTU-VM]
            |
        UBUNTU-VM eth0 (192.168.100.2/24)
            |
        br0 (192.168.100.1/24)             [on HOST]
            |
        OTHER VM eth0 (192.168.100.3/24)

    When I ping 192.168.100.3 from my UBUNTU-LXC, the source IP address is automatically changed to 192.168.100.2 by UBUNTU-VM. It's like having a NAT, whereas I really want my UBUNTU-LXC to talk with its own IP address. Is there any way to do this? Edit: these bits of info may be relevant: I am using KVM + libvirt to set up my VMs. Here is how I create my interface in UBUNTU-VM:

        <interface type='bridge'>
          <mac address='52:54:00:cb:aa:74'/>
          <source bridge='br0'/>
          <model type='e1000'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
        </interface>
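
    It is indeed NAT: a stock LXC setup masquerades everything leaving lxcbr0, which matches the rewriting described. One way out, assuming the rule matches the one LXC installs by default (check with iptables-save inside UBUNTU-VM first):

        # inside UBUNTU-VM: drop the masquerade rule for the container subnet
        sudo iptables -t nat -D POSTROUTING -s 10.0.0.0/24 ! -d 10.0.0.0/24 -j MASQUERADE
        # then give the rest of the network a route back to the containers
        # (run on HOST and/or OTHER VM):
        sudo ip route add 10.0.0.0/24 via 192.168.100.2

    The cleaner alternative is to skip lxcbr0 entirely: create a bridge inside UBUNTU-VM with eth0 enslaved, point lxc.network.link at it, and let the container take a 192.168.100.x address of its own.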

    Read the article

  • Vim Stuck In Insert Mode

    - by Levi Hackwith
    I've been using Vim for several months now via my web host (they allow PuTTY access). All of a sudden, the Escape key has become unresponsive: I cannot exit insert or any other mode by simply hitting Escape. I have to hit F1, which brings up the help in Vim and kicks me into command mode. I'm fairly certain that the Escape key on my keyboard is functioning fine, since all of my Windows shortcuts that use the Escape key operate normally. I know this is a ridiculous question and I'm certain there's a lot more to look into regarding a solution. What I really need is a solid lead as to where to start looking. Things that might help: I'm using Vim via PuTTY; I'm logging in using jailshell; I'm not root.
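
    Ctrl-[ sends the same byte as Escape (0x1b), which makes it a useful differential test: if Ctrl-[ leaves insert mode while Escape does not, the key is being eaten somewhere between PuTTY and the remote shell rather than by Vim itself. A stopgap remap while investigating (goes in ~/.vimrc; "jj" is just a conventional choice):

        " leave insert mode by typing jj quickly
        inoremap jj <Esc>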

    Read the article

  • GRUB error: unknown filesystem

    - by Ali
    I replaced my old laptop drive, which had a Win7 and Ubuntu dual boot, with an SSD. Now I have connected the old drive through a USB adapter and I want to boot from it. But this comes up:

        unknown filesystem
        grub rescue>

    As I need the programs from the old drive, I have to boot from it from time to time, and I don't want to install that software on the new drive. It takes time to swap the drives, so I want to boot from USB. How can I fix this?
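
    When the drive moves to a USB adapter, its partitions get new device names, so the GRUB stage embedded on it looks for its modules in the wrong place. From the rescue prompt it can usually be pointed back by hand (the partition below is a placeholder; use whichever one ls shows containing /boot/grub):

        grub rescue> ls
        grub rescue> set root=(hd1,msdos6)
        grub rescue> set prefix=(hd1,msdos6)/boot/grub
        grub rescue> insmod normal
        grub rescue> normal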

    Read the article

  • Help creating image from LVM

    - by jackhab
    I need to duplicate a CentOS hard drive image for multiple stations. The HD has the following layout:

        Disk /dev/sdb: 250GB
        Sector size (logical/physical): 512B/512B
        Partition Table: msdos

        Number  Start   End    Size   Type     File system  Flags
         1      32.3kB  107MB  107MB  primary  ext3         boot
         2      107MB   250GB  250GB  primary               lvm

    I saved /dev/sdb1 to file with fsarchiver, but for sdb2 I get:

        /fsarchiver savefs an2.fsa /dev/sdb2
        oper_save.c#1006,filesystem_mount_partition(): can't detect and mount
        filesystem of partition [/dev/sdb2], cannot continue.
        removed an2.fsa

    although fsarchiver probe simple correctly detects sdb2 as LVM2_member. Is fsarchiver the correct tool for this job? What's wrong? I'm on Ubuntu 9.10 with fsarchiver 0.6.8 and the LVM tools installed. Thanks.
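
    fsarchiver works on filesystems, and /dev/sdb2 is not one: it is an LVM physical volume, and the filesystems live on logical volumes inside it. Activating the volume group should expose them (the volume names below are the CentOS defaults and an assumption; lvs prints the real ones):

        sudo vgscan           # find volume groups on the attached disk
        sudo vgchange -ay     # activate them so device nodes appear
        sudo lvs              # list the logical volumes
        sudo fsarchiver savefs an2.fsa /dev/VolGroup00/LogVol00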

    Read the article

  • SELinux vs. AppArmor vs. grsecurity

    - by Marco
    I have to set up a server that should be as secure as possible. Which security enhancement would you use, and why: SELinux, AppArmor or grsecurity? Can you give me some tips, hints, and pros/cons for those three? As far as I know:

        SELinux: most powerful, but most complex
        AppArmor: simpler configuration/management than SELinux
        grsecurity: simple configuration thanks to auto-training; more features than just access control

    Read the article

  • How does Kerberos work with SSH?

    - by Phil
    Suppose I have four computers: Laptop (L), Server1 (S1), Server2 (S2), and a Kerberos server (K). I log in using PuTTY or SSH from L to S1, giving my username/password. From S1 I then SSH to S2; no password is needed, as Kerberos authenticates me. Describe all the important SSH and KRB5 protocol exchanges: "L sends username to S1", "K sends ... to S1", etc. (This question is intended to be community-edited; please improve it for the non-expert reader.)
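
    In outline: the client obtains a ticket-granting ticket from K at first authentication, then a service ticket for the target host's principal, and presents it inside SSH's GSSAPI authentication instead of a password; S1 can only hop onward to S2 if your credentials were delegated to it. The client-side switches enabling that flow, as a sketch of ~/.ssh/config on L (hostnames are placeholders):

        Host s1 s2
            GSSAPIAuthentication yes
            GSSAPIDelegateCredentials yes

    Running klist on S1 after logging in shows whether a forwarded ticket actually arrived.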

    Read the article

  • Limit download usage for clients

    - by Kumar P
    I am maintaining a few Windows XP machines under RHEL 5. I want to set a quota on download file size. How do I do it? I mean, on the LAN, user A's maximum download file size would be 300 MB, and user B's maximum download file size 200 MB. I want to block a download when a user tries to fetch a file larger than their limit; user A should not be allowed to download a file over 300 MB at a time. Alternatively, is it possible to set a quota for the maximum downloaded per day? How can I do this?
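
    If the XP machines reach the web through the RHEL box, a proxy is the usual enforcement point. A squid sketch (the client addresses are invented; the unit syntax shown is squid 3, while the squid 2.x shipped with RHEL 5 takes the size in bytes; per-day volume quotas need extra tooling on top, which this does not cover):

        # /etc/squid/squid.conf
        acl userA src 192.168.0.11
        acl userB src 192.168.0.12
        # refuse any single reply larger than the user's cap
        reply_body_max_size 300 MB userA
        reply_body_max_size 200 MB userB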

    Read the article
