Search Results

Search found 40479 results on 1620 pages for 'binary files'.


  • block access to certain website types

    - by frustrated teacher
    Need to block access to certain types of website without listing each URL to block. Students at our secondary school are going to porn sites, and I need to block all such access without having to list every possible site URL. Setting the Ratings for all categories to None, under the Content tab, for the rating files listed on my computers does not prevent access. Unchecking "Users may access sites with no rating", even with the security settings set to High, still allows the porn sites to come up. If that option is checked, then ONLY listed sites can open, and students would not be able to do any research via Google, for example. I would rather not have to keep checking each computer and blocking sites as the students find them.

    Read the article

  • File permission set to 644 and wordpress cannot access them?

    - by Joel
    Hi everyone, I'm having problems with WordPress not being able to access files. When installing certain themes and plugins, it fails with an error saying that it cannot create the directory, and when I try to edit style.css it says that I need to make the file writable. The file permissions were set to 644, and nothing worked until I changed them to 777 or 776. WordPress was installed by our local ISP. Anyone got any ideas? It seems that WordPress has not been set up properly. Is there any way I can fix this without reinstalling the whole thing? Thanks, Joel
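
    A safer direction than 777, assuming the common setup where PHP runs as the web server user — the www-data name and the install path below are assumptions, not details from the question — is to hand that user ownership of the WordPress tree and keep conservative permissions:

        # find out which user serves PHP (often www-data, apache or nobody)
        ps aux | egrep '(apache2|httpd|php-fpm)' | grep -v root | head -n 1

        # give that user ownership of the WordPress tree (path is an example)
        sudo chown -R www-data:www-data /var/www/wordpress

        # conservative permissions: 755 for directories, 644 for files
        sudo find /var/www/wordpress -type d -exec chmod 755 {} \;
        sudo find /var/www/wordpress -type f -exec chmod 644 {} \;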

    Read the article

  • BSOD after PC has been running for a while

    - by user1389999
    I'm having a problem where my computer is getting a blue screen. The BSOD happens after the PC has been running for about two days; the errors point to atikmpag.sys with error code 0x00000116. The problem started about a month ago, and it has happened all five times that I left my computer on that long in the past month. Because my computer is a pre-built one from Dell and I had upgraded the graphics card (at least a year ago) to a more demanding one, I replaced the stock 360W power supply with a more powerful 680W unit, since the problem looked like it could be related to insufficient wattage, but that didn't affect the problem at all. Here are the minidump files for the five BSODs that I have experienced: https://dl.dropbox.com/u/3488338/bsoddumps.zip System info: Windows 7 Home Premium x64, Dell Studio XPS 435MT, Radeon HD 5670 (version 12.4 of the Catalyst driver), Intel Core i7 920 2.67 GHz, 6GB of RAM
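
    For anyone triaging dumps like these: WinDbg from the Debugging Tools for Windows can decode them; a minimal sketch, where the symbol cache path and the dump file name are examples, not values from the question:

        :: point WinDbg at the Microsoft symbol server and open a minidump
        windbg -y srv*C:\symbols*https://msdl.microsoft.com/download/symbols ^
               -z C:\Windows\Minidump\example.dmp
        :: then, inside the debugger, run: !analyze -v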

    Read the article

  • internet speed and routers are controlled by whom

    - by Ozgun Sunal
    I need to learn two things; each is related to the other a bit. The first one is: while our LAN speed is usually 100 Mbps or even gigabit (very high compared to WAN speeds), WAN speeds, for instance on DSL connections, are far lower than this. However, we are able to download huge files at those speeds. Isn't this odd? (My real concern is why WAN speed is lower than LAN speed.) The second: who controls the routers across the wider Internet? While we, as web clients, are connected to the Internet, packets travel through those routers to the destination networks. Are those routers all inside the ISP's network, and if not, who controls that large number of routers?

    Read the article

  • How to run scripts within a telnet session?

    - by wenzi
    I want to connect to a remote host using telnet. There is no username/password verification, just telnet remotehost. Then I need to input some commands for initialization, and after that I need to repeat the following command: cmd argument. The argument is read from a local file; there are many lines in this file, each line being one argument. After running one "cmd argument", the remote host outputs some results: it may output a line with the string "OK", or many lines, one of which contains the string "ERROR", and I need to act according to the results. Basically, the script is like:

        initialization_cmd   # some initial commands
        while read line
        do
            cmd $line
            # here the remote host will output results; how can I put the results into a variable?
            # here I want to judge the results, like:
            if $results contains "OK"; then
                echo $line >> good_result_log
            else
                echo $line >> bad_result_log
            fi
        done < local_file

    good_result_log and bad_result_log are local files. Is it possible or not? Thanks! NOTE: I can't control B (the remote host), I can only run the initial commands and cmd $line on B.
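
    One way to keep a single telnet session open and pair each command with its output is a bash coprocess (bash 4+); a minimal sketch under the question's assumption that each command's output eventually contains a line with OK or ERROR:

        #!/bin/bash
        # keep one telnet session open for the whole run
        coproc TELNET { telnet remotehost; }

        # initialization commands go down the coprocess's stdin
        echo "initialization_cmd" >&"${TELNET[1]}"

        while read -r line; do
            echo "cmd $line" >&"${TELNET[1]}"
            # consume output lines until the OK/ERROR marker shows up
            while IFS= read -r out <&"${TELNET[0]}"; do
                case "$out" in
                    *OK*)    echo "$line" >> good_result_log; break ;;
                    *ERROR*) echo "$line" >> bad_result_log;  break ;;
                esac
            done
        done < local_file

    Buffering and pacing over telnet can be fragile; if this sketch stalls, expect(1) is the usual heavier-duty tool for exactly this kind of dialogue.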

    Read the article

  • Using robocopy and excluding multiple directories

    - by GorrillaMcD
    I'm trying to copy some directories from a server before I restore from backup (my latest backup was corrupt, so I have to use an older one :( ). I'm in the Windows Recovery Environment and have access to the server's file system G:\ and my backup media C:\. But, since I'm more familiar with Linux, I'm having a bit of trouble with the command line in Windows, specifically robocopy. I want to copy multiple directories (maintaining the same directory structure) from G:\ to C:\ while excluding others (namely, the Windows and Program Files folders). I can't figure out the syntax for the /XD option. I was hoping to do something like: robocopy G: C:\backup /CREATE /XD "dir1","dir2", ...
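
    For reference, robocopy expects the /XD list space-separated rather than comma-separated, with quotes only around paths that contain spaces; something along these lines, using the drives from the question:

        robocopy G:\ C:\backup /E /XD "G:\Windows" "G:\Program Files"

    Note that /CREATE copies the directory tree and zero-length files only, which is probably not what a pre-restore copy wants; /E copies subdirectories with their contents.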

    Read the article

  • Backup of whole hard drive during full operation with Acronis True Image Home 2010

    - by testing
    Currently I'm creating a backup of one of my hard drives. It's my main hard drive, the one the operating system is running on. Because the backup is done during full operation, I'm wondering whether the backup really includes all files (registry, ...). Can I restore the backup onto another hard drive and then run the operating system again without problems? Normally I would say that you have to boot from a CD (without a running OS) to make a backup. I did a Google search but didn't find my case so far.

    Read the article

  • windows 2008 server move users to new server

    - by moos3
    I have a new server that is replacing a current Windows Server 2008 R2 machine. I want to move all the local users and IIS sites to the new box. Is there a way to export the two and import them on the new box? I have synced all the files for all the sites to the new box. This box doesn't belong to a domain, so it's not a matter of joining the domain. The users I'm talking about are the local computer users.
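
    For the IIS half, site and application pool definitions can be exported and re-imported with appcmd; a sketch, run from elevated prompts on each box (local user accounts have no equivalent one-liner and generally have to be recreated, e.g. with net user):

        :: on the old box: dump app-pool and site definitions
        %windir%\system32\inetsrv\appcmd list apppool /config /xml > pools.xml
        %windir%\system32\inetsrv\appcmd list site /config /xml > sites.xml

        :: on the new box: import them (pools first, since sites reference them)
        %windir%\system32\inetsrv\appcmd add apppool /in < pools.xml
        %windir%\system32\inetsrv\appcmd add site /in < sites.xml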

    Read the article

  • htaccess filesMatch exclusion

    - by Hikari
    I have the following directive in my htaccess:

        <FilesMatch "\.(gif|jpe?g|png|js|css|swf|php|ico|txt|pdf|xml|html?)$">
            FileETag None
            <IfModule mod_headers.c>
                Header unset ETag
                Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
                Header set Pragma "no-cache"
                Header set Expires "Wed, 11 Jan 1984 05:00:00 GMT"
            </IfModule>
        </FilesMatch>

    I copied that regex from somewhere on the Web months ago. It should add those headers to any HTTP response that does NOT have those extensions, but it's not working: it's adding them to every response. I also need to create another directive that adds Header set Cache-Control "max-age=3600, public" to responses for files that DO have those extensions. Could anybody help me make proper FilesMatch regexes?
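
    For what it's worth, FilesMatch takes PCRE, so "everything except these extensions" can be written as a negative lookahead; a sketch reusing the extension list from the question (assumes mod_headers is loaded; php is left out of the static list on the assumption that dynamic responses should stay uncached):

        # no-cache for any file that does NOT end in one of these extensions
        <FilesMatch "^(?!.*\.(gif|jpe?g|png|js|css|swf|ico|txt|pdf|xml|html?)$).*$">
            Header set Cache-Control "max-age=0, no-cache, no-store, must-revalidate"
        </FilesMatch>

        # long-lived caching for the static extensions themselves
        <FilesMatch "\.(gif|jpe?g|png|js|css|swf|ico|txt|pdf|xml|html?)$">
            Header set Cache-Control "max-age=3600, public"
        </FilesMatch>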

    Read the article

  • sudo access for desktop actions in Gnome/KDE?

    - by Jakobud
    I feel kind of silly asking this question. I'm using CentOS 5.4 and KDE. I downloaded an archive and I want to drag and drop the contents into a folder that I need root access to write to. I can obviously go into a terminal and sudo blah blah, but how do I get sudo access for desktop operations, like simple dragging and dropping of files? KDE just tells me that I don't have permission to do that, but doesn't give me the option of entering the root password or using sudo.
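
    One common workaround, assuming the kdesu helper that ships with KDE on CentOS 5, is to launch a file manager as root and do the drag-and-drop inside it (the target path is an example):

        # opens Konqueror with root privileges (prompts for the root password)
        kdesu konqueror /opt/target-folder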

    Read the article

  • Complex nagios command

    - by gonvaled
    I have defined the following command for one of my service checks:

        define command{
            command_name    mycommand
            command_line    $USER1$/check_by_ssh -p $ARG1$ -l nagios -i /etc/nagios2/keys/key1 -H $HOSTADDRESS$ -v -C 'source $USER10$ ; command.py -a get --alert-name $ARG2$ -q'
        }

    The problem is that Nagios seems to trip over the semicolon when parsing the command, producing garbage that cannot be executed. I have also tried a backslash, \;, to no avail. If I run the command directly in the shell, it works, which means this is not a problem with check_by_ssh but a problem in the parsing of the Nagios configuration file. How can I debug this? Is there a way to get a listing of all the commands that Nagios has parsed when reading the configuration files?
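
    For what it's worth, a semicolon starts a comment in Nagios object definitions, so everything after it is silently dropped; one commonly suggested workaround is to hide the semicolon in a $USERn$ macro defined in resource.cfg (the macro number here is an arbitrary unused one):

        # /etc/nagios2/resource.cfg
        $USER2$=;

        # the command definition then references the macro instead of a literal ';'
        command_line $USER1$/check_by_ssh ... -C 'source $USER10$ $USER2$ command.py -a get --alert-name $ARG2$ -q'

    Running nagios -v /etc/nagios2/nagios.cfg pre-flights the configuration and at least shows what the parser accepted.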

    Read the article

  • Trying to get a new user up on pfSense IPSec VPN; Config file import failed, now getting gateway err

    - by Chris
    Caveat: I am not a sysadmin, so please forgive the n00bness of the query. We have a new user and I'm trying to get them up on VPN. We use pfSense as an IPsec endpoint, and this person is using Shrew Soft for the client. I had created an entry in pfSense for them and then edited a previous user's config file. Shrew Soft didn't import the config file very well and I had to hand-edit the information. Now we are getting gateway errors. One thing I've noticed is that there is a difference between the value of the preshared key stored on the firewall and the PSK stored in the config file. I assume it has something to do with a hash, but I've no idea if that's the case and whether that might be what's causing the problem. Any suggestions greatly appreciated! Tangentially, is there software to generate these config files?

    Read the article

  • rsync server side limit bandwidth/connection

    - by c2h2
    In a VoIP application, I have up to 3000 clients rsyncing audio files from their Linux server daily. The server sits in a data center (10Mbps inbound/outbound) and also works as the VoIP SIP server running FreeSWITCH, so low ping latency has to be ensured. I would therefore like server-side control of rsync that can: limit the total outbound bandwidth; limit the total number of connections (rejecting clients at the maximum and letting them retry after a specific time frame); and, optionally, list/kill individual connections. Normally I would use ssh + rsync + PEM keys with some extra options, but the above requirements are not feasible with simple command lines. Can anyone point me in a direction, or show some scripts/tools? I would probably also integrate them and release the result on GitHub. Thanks!
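
    Running rsync as a daemon instead of over ssh gets at least the connection cap natively; a sketch of /etc/rsyncd.conf, where the module name and path are examples, with interface-level shaping as one option for the bandwidth cap:

        [audio]
            path = /srv/audio
            read only = yes
            max connections = 30        # client 31+ is refused and can retry
            lock file = /var/run/rsyncd.lock

        # total outbound bandwidth can be capped at the interface with tc,
        # e.g. (device and rate are assumptions):
        #   tc qdisc add dev eth0 root tbf rate 8mbit burst 32kb latency 400ms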

    Read the article

  • MacVim commands not working in insert mode

    - by paul smith
    The following shortcuts I have defined in my settings:

        "Select next/prev tabs
        noremap <C-Tab> :tabnext<CR>
        noremap <C-S-Tab> :tabprev<CR>

    They are for going to the previous and next tabs of open files. The only annoying thing about them is that any time I want to switch tabs, I first have to get out of insert mode. How can I force MacVim to honor these shortcuts even if I'm in insert mode?
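
    A sketch of the insert-mode counterparts — inoremap only applies in insert mode, and the leading <Esc> drops back to Normal mode before switching:

        "Tab switching from insert mode as well
        inoremap <C-Tab> <Esc>:tabnext<CR>
        inoremap <C-S-Tab> <Esc>:tabprev<CR>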

    Read the article

  • How can I remount an NFS volume on Red Hat Linux?

    - by user76177
    I changed the user ID of a user on an NFS client that mounts a volume from another server. My goal is to have the two users share the same ID, so that both servers can read and write to the volume. I changed the ID successfully on the client system, but now when I look at the NFS mount from that system, it reports the files as being owned by the old ID. So it looks like I need to "refresh" that mount. I have found many instructions on how to remount, but each seems slightly different according to the type of system. Is there a simple command I can run to get the mounted volume to refresh so that it picks up the new user settings?
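
    A generic sequence that works on most Linux systems, assuming the export is listed in /etc/fstab (the mount point is an example):

        sudo umount /mnt/data      # add -l for a lazy unmount if the mount is busy
        sudo mount /mnt/data       # re-reads the options from /etc/fstab

    Note that NFS identifies owners by numeric uid, so files created under the old uid keep showing it until they are chowned on the server; the remount only refreshes cached attributes.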

    Read the article

  • Xen Disk Performance Issues

    - by user98651
    I'm currently running Xen PV on CentOS 5, with my domUs as flat files on a hardware RAID (write cache enabled) formatted with XFS. On the dom0 I can get about 500MB/s in a 2GB dd write from /dev/zero, but on the domUs I'm lucky to get 10MB/s (it is usually around half that). I've tried changing the disk scheduler to noop on the domUs, changed some mount parameters, and tweaked the resource allocations of both the dom0 (prioritized CPU) and the domUs (increased RAM and VCPU allocations). None of these steps produced any noticeable change in performance. My instinct is that this is not a hardware problem, given the solid performance of the dom0. Any ideas on what might be causing it? I'm considering moving to LVM-based domUs.
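
    One caveat when comparing numbers like these: a plain dd from /dev/zero largely measures the page cache; adding oflag=direct makes dom0 and domU runs more comparable (the path and size are examples):

        # write 2GB bypassing the page cache
        dd if=/dev/zero of=/tmp/ddtest bs=1M count=2048 oflag=direct
        rm /tmp/ddtest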

    Read the article

  • Windows 7 and ocx installation

    - by Naren
    I have an application which runs on XP very well. It downloads some files from a server and registers one OCX object. On XP, after installing, I am able to see the entry in RegEdit with the name of that OCX component. But when I tried the same thing on Windows 7, it does not register the OCX component, even though I have the same access rights as on XP; I am unable to find any entry in RegEdit. When I install it manually by running the Command Prompt as Administrator, it installs successfully. How can I make it work automatically, as it does on XP? Thanks for the help.
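
    The difference from XP is almost certainly UAC: registering an OCX writes under HKLM, which needs elevation. The manual step, for reference (the path is an example):

        :: from a Command Prompt started with "Run as administrator"
        regsvr32 C:\MyApp\component.ocx

    For the automatic case, the downloader/installer executable needs an application manifest with requestedExecutionLevel set to requireAdministrator, so Windows prompts for elevation instead of silently failing the HKLM writes.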

    Read the article

  • ftp users configuration in OpenSuse 12

    - by chieroz
    I usually work with Mac OS X servers, but this time I need to set up an FTP service on an OpenSuse 12.2 server and I am a little lost. I am using the remote YaST2 tool via ssh. I created several users who can connect via ssh and/or FTP, so the basic setup is OK. But when connecting via FTP, none of my users have write permissions. The FTP directory for authenticated users is /srv/www/htdocs, which has permissions root:root. The OpenSuse manual says it's bad practice to change these permissions, but my normal users (even the ones in the sudoers list) cannot upload files. So I am stuck: as a workaround I use rsync, but from time to time I just need a working FTP connection. What's the right approach to user permissions in this scenario? Thanks a lot.
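
    One conventional middle ground, sketched below, is to leave ownership at root but grant write access through a dedicated group (the group and user names are examples):

        groupadd webeditors
        usermod -aG webeditors alice          # repeat for each ftp user
        chgrp -R webeditors /srv/www/htdocs
        chmod -R g+w /srv/www/htdocs
        find /srv/www/htdocs -type d -exec chmod g+s {} \;   # new files inherit the group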

    Read the article

  • Use Windows 7 offline sync with external usb hd

    - by René
    Yeah, truly the whole question is in the header: is there a way to use Windows 7 offline sync (which we know from network-mapped drives) with an external USB hard drive? If not, are there similar built-in tools or good third-party ones? My scenario: I want to buy an ultrabook with an SSD, which is rather limited in space, so I'm going to put all files on an external HD and only keep current projects on the local SSD. Say I have to switch projects: it would be easy to just change the sync folders and have the second project synced to my HD too. With network-mapped drives it's that easy: paths don't differ when the drive is offline, so in most situations you don't even notice the folder is offline, and you only have to activate offline files for the folders you currently need for work. So, is there a similar solution for USB hard drives?
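
    Offline Files itself only understands network paths, but as a crude built-in stand-in for the one-active-project case, a scheduled robocopy mirror works (drive letters and paths are examples; /MIR deletes files missing from the source side, so keep the direction straight):

        :: pull the active project from the external drive to the SSD
        robocopy E:\Projects\Current C:\Projects\Current /MIR
        :: ... work locally, then push the changes back
        robocopy C:\Projects\Current E:\Projects\Current /MIR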

    Read the article

  • How to manage configuration software installations of non-domain Windows XP machines?

    - by Digi
    I have a large set of unattended Windows XP machines that are not connected to a domain, or even to each other. I am struggling to find any tool that lets me deal with them in one application. I am hoping to find software where I can install a client on each machine and then have it essentially proxy out configuration information and possibly commands (install, uninstall, stop service, etc.) across the whole network. The closest I've come is Nagios and its client, but it cannot be used to push files through and run commands remotely. Any suggestions?

    Read the article

  • growing EBS RAID volume

    - by Ryan Fernandes
    I've created a RAID0 configuration with two 1GB EBS volumes, mounted at /dev/md0 using mdadm and formatted with XFS. Next, I copied some files over to fill the volume to around 30% of its 2GB capacity. I then created snapshots of the volumes using ec2-consistent-snapshot and created volumes from those snapshots, but specified the volume size as 2GB (effectively doubling the capacity of each disk). I then spun up a new instance, assembled the RAID0 configuration on /dev/md0 from the two volumes mentioned above, and mounted it at /vol. df -hT showed /vol as 2GB (as expected). Now I ran sudo xfs_growfs -d /vol. The command completed normally but reported blocks changed from 523776 to only 524160, and df -hT still shows /vol as 2GB instead of the expected 4GB. I rebooted, remounted, and reassembled the RAID, but it still reports the old size. EDIT: trying to grow the RAID using mdadm --grow yields mdadm: raid0 array /dev/md0 cannot be reshaped. Is there any other way I can grow a RAID0 array?
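
    One workaround that gets cited for exactly this mdadm error — a sketch rather than a guaranteed recipe, worth testing on scratch volumes first: raid0 cannot be reshaped directly, but a reasonably recent mdadm can convert it to raid4, resize, and convert back:

        mdadm --grow /dev/md0 --level=4     # raid0 -> (degraded) raid4
        mdadm --grow /dev/md0 --size=max    # take up the enlarged components
        mdadm --grow /dev/md0 --level=0     # back to raid0 once the reshape finishes
        xfs_growfs /vol                     # then grow the filesystem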

    Read the article

  • Self Hosted Dropbox Alternative?

    - by Hutch
    Does anyone know of any self-hosted Dropbox alternatives? We have a need to share files/folders between staff and partners (small scale), and for various reasons we'd prefer to host it ourselves. SharePoint seems a little too focused on check-in/check-out, and things like WebDAV/FTP seem a little kludgy. In an ideal world, something where you (as an IT person) can set up an area, make a user its "owner", and from there let them add their customers would be great. Windows-based or a virtual appliance would be ideal.

    Read the article

  • Can I make Puppet's module-to-file mapping start searching at the top of the modules tree?

    - by John Siracusa
    Consider these two Puppet module files:

        # File modules/a/manifests/b/c.pp
        class a::b::c {
            include b::c
        }

        # File modules/b/manifests/c.pp
        class b::c {
            notify { "In b::c": }
        }

    It seems that when Puppet hits the include b::c directive in class a::b::c, it searches for the corresponding *.pp file by looking backwards from the current class and decides that the correct file is located at ../../b/c.pp. In other words, it resolves b::c to the same *.pp file that the include b::c statement appears in: modules/a/manifests/b/c.pp. I expected it (and would like it) to instead find and load the file modules/b/manifests/c.pp. Is there a way to make Puppet do this? If not, it seems that a module name may not appear anywhere inside another module's namespace, which is a pretty surprising restriction.
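
    This is Puppet's relative namespacing at work; the standard fix is to anchor the name at top scope with a leading ::, which forces resolution from the module path root rather than relative to a::b:

        # File modules/a/manifests/b/c.pp
        class a::b::c {
            include ::b::c    # leading :: resolves to modules/b/manifests/c.pp
        }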

    Read the article

  • Is there a way to rsync in batches?

    - by Chris
    I have a huge chunk of data (11G) in a Subversion repository that I'm migrating to Alfresco with rsync; Lucene indexes new files as they hit the file system. I'm using a DAV mount as a proxy to allow me to rsync. The issue I'm having is that the post-rsync indexing is quite an expensive operation for such a huge chunk of data, so I was wondering whether there's a way to logically separate the rsync into identically sized batches (say 500MB each) so I could schedule them in cron. At the moment, I'm traversing the top-level folders and taking the smallest ones across first, but once I'm done with those, the much larger subdirectories are going to be quite troublesome. Please let me know if you need any further info. Thanks in advance.
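
    One way to cut the transfer into roughly fixed-size chunks is to precompute file lists and feed them to rsync's --files-from, one list per cron run; a sketch where the paths are examples and GNU find is assumed:

        #!/bin/bash
        SRC=/data/svn                        # example source path
        cd "$SRC" || exit 1

        # split the file inventory into ~500MB lists: batch_0.list, batch_1.list, ...
        find . -type f -printf '%s %p\n' |
        awk -v limit=$((500*1024*1024)) 'BEGIN { batch = 0 }
            {
                size += $1
                sub(/^[0-9]+ /, "")                    # drop the size, keep the path
                print > ("batch_" batch ".list")
                if (size > limit) { close("batch_" batch ".list"); batch++; size = 0 }
            }'

        # each cron run then ships one list, e.g.:
        rsync -av --files-from=batch_0.list "$SRC" /mnt/alfresco-dav/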

    Read the article

  • Windows 7 + Dell D830 = frequent blue screens

    - by Mulone
    I'm using Windows 7 on a Dell laptop (Latitude D830). The laptop used to be incredibly stable on Windows XP, but now I experience frequent blue screens (with a message saying "MEMORY DUMP") when I open a lot of applications concurrently. Is there a way to diagnose the issue and track it down? It could be some incompatible driver or application. I checked the folder C:\Windows\Minidump and found some files; then I tried to open them and they disappeared. Any hints?

    Read the article
