Search Results

Search found 10989 results on 440 pages for 'wpf commands'.


  • Router only assigns a small number of IPs

    - by Liam Coates
    Been having a problem with my router for a while now; it might just be because it is really old, but here's the problem: if a lot of computers are connected to my home network, someone gets disconnected. Devices are assigned IPs, and at a certain point (I don't know how many) either two devices are assigned the same IP, or something else happens and someone gets disconnected, until I soft-reset the router, which takes about 30 seconds. My tablet, my PC, my sister's iPad, two laptops, and a netbook are the most that can be connected at one time, so that is six devices, which should be fine. The only way I know this is the problem is that I was online on my PC, turned on my tablet, and got disconnected while the tablet stayed connected. This was just after I turned the tablet on, so I know my router is having difficulty with IPs; it is as if it assigned the same IP to the tablet, which then clashed with my desktop and knocked me off. I see that sometimes the following solves it as well, so I wrote a batch file with a menu to execute these commands, as I have to do it so often:

        ipconfig /release
        ipconfig /flushdns
        ipconfig /renew

    Any ideas? Or shall I just get a new router, as this one is old and maybe can't handle giving out that many IPs? Cheers!
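
    A minimal sketch of such a batch file, assuming only the stock Windows ipconfig tool (the menu text and file name are illustrative):

        @echo off
        rem renew-ip.bat -- release the DHCP lease, flush the DNS cache,
        rem and request a fresh lease. Run from an elevated prompt.
        echo 1. Renew IP lease
        echo 2. Quit
        set /p choice=Choose an option:
        if "%choice%"=="1" (
            ipconfig /release
            ipconfig /flushdns
            ipconfig /renew
        )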

  • How would I force Debian to use the physical sector size on a hard disk?

    - by Confused User
    I just purchased a few new 3 TB WD drives. These have physical 4k sectors, but there is some sort of layer which is providing 512 B logical sectors (see the partition table below). In order to get some more speed out of my hard drives, I would like to get rid of this logical layer and actually use the physical 4k sectors. However, I can't figure out how to do this (or even if it's possible) from the man pages of fdisk and parted, or from searching Google. Does anybody know how this could be done? As to why this is relevant: this page demonstrates that merely aligning the sectors properly can already make up to a 25% speed difference for reads, and more than 2500% for writes in some cases! Getting rid of the logical sectors in favor of the physical ones should improve speeds even more. Thanks!

        $ parted /dev/sdc
        GNU Parted 2.3
        Using /dev/sdc
        Welcome to GNU Parted! Type 'help' to view a list of commands.
        (parted) print
        Model: ATA WDC WD30EZRX-00M (scsi)
        Disk /dev/sdc: 3001GB
        Sector size (logical/physical): 512B/4096B
        Partition Table: gpt

        Number  Start   End     Size    File system  Name  Flags
         1      1049kB  3001GB  3001GB  zfs
         9      3001GB  3001GB  8389kB

    P.S. I don't care about the data on the drives; I was just playing with different file systems. Also, this is my first time posting here, so please let me know if my posts should be formatted differently, etc.
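
    The 512 B logical sectors on these Advanced Format drives are emulated by the drive firmware, so the operating system cannot switch them off; what you can control is partition alignment. A minimal sketch with the parted version shown above (the start point and partition name are illustrative):

        $ parted -a optimal /dev/sdc
        (parted) mklabel gpt
        (parted) mkpart primary 1MiB 100%
        (parted) align-check optimal 1    # should report "1 aligned"

    A 1 MiB start (2048 logical sectors) is a multiple of 4 KiB, so every filesystem block then lands on a physical sector boundary.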

  • Cisco ASA 5510 Time of Day Based Policing

    - by minamhere
    I have a Cisco ASA 5510 set up at a boarding school. I determined that many (most?) of the students were downloading files, watching movies, etc., during the day, and this was causing the academic side of our network to suffer. The students should not even be in their rooms during the day, so I configured the ASA to police their network segment and limit their outbound bandwidth. This resolved all of our academic issues, and everyone was happy. Except the resident students. I have been asked to change/remove the policing policy at the end of the day, to allow the residents access to the unused bandwidth at night. There's no reason to let bandwidth sit unused at night just because it would be abused during the day. Is there a way to set up time-of-day-based policies on the ASA? Ideally I'd like to be able to open up the network at night and all day during weekends. If I can't set time-based policies, is it possible to schedule the ASA to load a set of commands at a specific time? I suppose I could just set up a scheduled task on one of our servers to log in and make the changes with a simple script, but this seems like a hack, and I'm hoping there is a better or more standard way to accomplish this. Thanks.

    Edit: If there is a totally different solution that would accomplish a similar goal, I'd be interested in that as well. Free/cheap would be ideal, but if a separate internet connection is my only other option, it might be worth fighting for money for hardware or software to do this better or more efficiently.
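
    ASA software does have time-range objects that can be attached to an access-list entry, and the Modular Policy Framework can police a class that matches that ACL. A sketch, assuming the student segment is 192.168.10.0/24 (names, times, and rates are illustrative, and policer behavior with an inactive ACE has varied between releases, so lab-test this on your ASA version first):

        time-range DAYTIME
         periodic weekdays 7:00 to 21:00
        access-list STUDENT-DAY extended permit ip 192.168.10.0 255.255.255.0 any time-range DAYTIME
        class-map STUDENT-DAY-CLASS
         match access-list STUDENT-DAY
        policy-map STUDENT-POLICY
         class STUDENT-DAY-CLASS
          police output 2000000 16000
        service-policy STUDENT-POLICY interface inside

    Outside the time range the ACE stops matching, so the policer no longer applies and the full bandwidth is available.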

  • Can MS Services for Unix be deployed and accessed from a shared drive?

    - by Ian C.
    I'm interested in experimenting with replacing our dependency on MKS with MS' Services for Unix toolset. I was wondering if anyone has any experience with deploying SFU on a shared drive? We like to, wherever possible, host our dev tools on one central NAS and call to the NAS to access the tools instead of rolling stuff out to each and every desktop. I'm not interested in the NFS support or ActiveState Perl; really, none of the daemon technology is required here. I'm looking for replacements for the coreutils/binutils stuff you find in Linux (and MKS on Windows): sed, awk, csh, bash, grep, ls, find -- the meat-and-potatoes command-line apps that our build and test scripts are built around. If I limit the install to just the Interix GNU components (and maybe the Remote Connectivity components), will it run nicely from a shared location? To head off some questions: yes, I've looked at Cygwin. Unfortunately, its performance in our build and test environment is poor: it runs considerably slower than MKS, and it's not a direct drop-in replacement for MKS (thanks to its internal pathing and limitations with commands like 'ps'), so it's a tougher sell. Yes, I'm looking at the MinGW offering in parallel to this.

  • Shell not finding binary when attempting to execute it (it's _definitely_ there)

    - by eegg
    I have a specific set of binaries installed at ~/.GutenMark/binary/. These were previously working correctly, but for seemingly no reason, when I attempt to execute them the shell claims not to find them:

        james@anubis:~/.GutenMark/binary$ ls -al
        ...
        -rwxr-xr-x 1 james james 2979036 2009-05-10 13:34 GUItenMark
        ...
        -rwxrwxrwx 1 james james   76952 2009-05-10 13:34 GutenMark
        ...
        -rwxr-xr-x 1 james james   10156 2009-05-10 13:34 GutenSplit
        ...
        james@anubis:~/.GutenMark/binary$ ./GutenMark
        bash: ./GutenMark: No such file or directory
        james@anubis:~/.GutenMark/binary$

    I've tried to isolate the cause of this, with no success. The same happens with zsh, bash, and sh (all giving their appropriate "file not found" error; this is definitely not strange output from the binary itself). The same happens as user james or as root. Nor is it directory-specific: if I move the whole directory installation, or just a single binary, anywhere else, the same happens when attempting to execute it. The same even happens when I put the directory in $PATH and just execute "GutenMark". It also happens when I execute it from a script (I've tried Python's commands module, though this appears to just call sh). The problem appears to be specific to the binaries themselves, yet they appear to never actually get executed. Any ideas?
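
    This symptom (the shell reports "No such file or directory" for a file that plainly exists and is executable) is classically caused by a missing ELF interpreter, e.g. a 32-bit binary on a 64-bit system without the 32-bit loader installed. A read-only diagnostic sketch:

        $ file ./GutenMark                  # note the ELF class: 32-bit vs 64-bit
        $ readelf -l ./GutenMark | grep interpreter
              [Requesting program interpreter: /lib/ld-linux.so.2]
        $ ls -l /lib/ld-linux.so.2          # if the interpreter is missing, the
                                            # shell blames the binary as "not found"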

  • Mac updated just now, postgres now broken

    - by Dave
    I run Postgres 9.1 / Ruby 1.9.2 / Rails 3.1.0 on a MacBook Air for local dev. It's all been running smoothly for months (though this is the first time I've done development on a Mac). It's a MacBook Air from last year, and today I got the Mac OS X software update message as I have a few times before; my system downloaded approx. 450 MB of updates and restarted. It now says it's on OS X 10.7.3. Point is, Postgres has stopped working. When I start my thin server (mirroring Heroku Cedar) as normal and then browse to my Rails app, I get:

        PG::Error
        could not connect to server: Permission denied
        Is the server running locally and accepting connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?

    What happened? After browsing around a few questions I'm still confused, but here's some extra info:

    - Running psql from the command line gives the same error
    - I can run pgAdmin 3, connect via it, and run SQL with no problems
    - Running which psql shows the version as /usr/bin/psql
    - I created a PostgreSQL user back when I got the Mac (it's always been on Lion). I've no idea why; almost certainly I was following a tutorial which I neglected to store in my notes. Point is, I am aware there is a _postgres user as well.
    - I know it's rubbish, but apart from a note on passwords, I don't have any extra info on how I configured Postgres, though the obvious implication is that I did not use the _postgres user.

    Anyone have suggestions or information on what might have changed / what I can try to debug and fix? Thanks.

    Edit: Playing around based on this question and answer: http://stackoverflow.com/questions/7975414/check-status-of-postgresql-server-mac-os-x, see this string of commands:

        $ sudo su postgreSQL
        bash-3.2$ /Library/PostgreSQL/9.1/bin/pg_ctl start -D /Library/PostgreSQL/9.1/data
        pg_ctl: another server might be running; trying to start server anyway
        server starting
        bash-3.2$ 2012-04-08 19:03:39 GMT FATAL: lock file "postmaster.pid" already exists
        2012-04-08 19:03:39 GMT HINT: Is another postmaster (PID 68) running in data directory "/Library/PostgreSQL/9.1/data"?
        bash-3.2$ exit
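
    One common cause after OS X point updates, assuming an EnterpriseDB-style install under /Library/PostgreSQL/9.1: the psql being run is Apple's own /usr/bin/psql, which looks for the socket in /var/pgsql_socket, while the installed server listens elsewhere. A few things to try as a sketch (paths illustrative):

        $ which -a psql                  # list every psql on the PATH, in order
        $ psql -h localhost              # connect over TCP, bypassing the socket
        $ export PATH=/Library/PostgreSQL/9.1/bin:$PATH   # prefer the installed build

    If psql -h localhost works, the server itself is fine and only the socket path / PATH ordering changed with the update.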

  • How to add Sharepoint Powershell to Console2

    - by BGM
    Salvete! I want to add the PowerShell console for SharePoint to the tab list in Console2. I already have plain PowerShell, but I want the SharePoint PowerShell snap-in added automatically. If I look at the properties of the SharePoint PowerShell console shortcut, I see this:

        C:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.exe -NoExit " & ' C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\CONFIG\POWERSHELL\Registration\\sharepoint.ps1 ' "

    but that doesn't work in Console2, so I tried this, which doesn't work either:

        C:\WINDOWS\system32\windowspowershell\v1.0\powershell.exe -PSConsoleFile "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\CONFIG\POWERSHELL\Registration\psconsole.psc1" -NoExit " & ' C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\CONFIG\POWERSHELL\Registration\\sharepoint.ps1 ' "

    Whenever I try, it will load PowerShell, but not the SharePoint console. I get this:

        Add-PSSnapin : The Windows PowerShell snap-in 'Microsoft.SharePoint.PowerShell' is not installed on this machine.
        At C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\CONFIG\POWERSHELL\Registration\SharePoint.ps1:3 char:13
        + Add-PsSnapin <<<< Microsoft.SharePoint.PowerShell
            + CategoryInfo          : InvalidArgument: (Microsoft.SharePoint.PowerShell:String) [Add-PSSnapin], PSArgumentException
            + FullyQualifiedErrorId : AddPSSnapInRead,Microsoft.PowerShell.Commands.AddPSSnapinCommand

    I tried this out, too. Anybody know?
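
    That exact error usually means a 32-bit PowerShell is being started: the SharePoint snap-in registers only in the 64-bit shell, and a 32-bit Console2 silently gets the 32-bit powershell.exe via SysWOW64 redirection. A sketch of a tab command that forces the 64-bit shell through the Sysnative alias (assuming your Console2 build is 32-bit):

        C:\Windows\Sysnative\WindowsPowerShell\v1.0\powershell.exe -NoExit -Command "Add-PSSnapin Microsoft.SharePoint.PowerShell"

    If your Console2 is 64-bit, Sysnative does not exist; use the System32 path with the same -Command instead.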

  • How do I Install Intermediate Certificates (in AWS)?

    - by getmizanur
    I have installed a private key (PEM encoded) and a public key certificate (PEM encoded) on an Amazon load balancer. However, when I check the SSL with a site test tool, I get the following error:

        Error while checking the SSL Certificate!!
        Unable to get the local issuer of the certificate. The issuer of a locally looked up certificate could not be found. Normally this indicates that not all intermediate certificates are installed on the server.

    I converted the crt file to PEM using these commands from this tutorial:

        openssl x509 -in input.crt -out input.der -outform DER
        openssl x509 -in input.der -inform DER -out output.pem -outform PEM

    During setup of the Amazon load balancer, the only option I left out was the certificate chain (PEM encoded); however, this was optional. Could this be the cause of my issue? And if so, how do I create the certificate chain?

    UPDATE: If you make a request to VeriSign, they will give you a certificate chain. This chain includes the public crt, the intermediate crt, and the root crt. Make sure to remove the public crt from your certificate chain (it is the topmost certificate) before adding the chain to the certificate-chain box of your Amazon load balancer. If you are making HTTPS requests from an Android app, the above instruction may not work for older Android OS versions such as 2.1 and 2.2. To make it work on older Android OS: go to the VeriSign site, click on the "retail ssl" tab, then "secure site" "CA Bundle for Apache Server", and copy and paste those intermediate certs into the certificate-chain box. Just in case you have not found it, here is the direct link. If you are using GeoTrust certificates, the solution is much the same for Android devices; however, you need to copy and paste their intermediate certs for Android.
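
    A sketch of building and sanity-checking a chain file before pasting it into the ELB's certificate-chain box (the file names are illustrative; the ordering is the part that matters):

        # intermediate(s) first, root last -- the server certificate itself
        # does NOT belong in the chain
        cat intermediate.pem root.pem > chain.pem
        # sanity check: the server cert should verify against the chain
        openssl verify -CAfile chain.pem server.pem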

  • Exchange Connector Won't Send to External Domains

    - by sisdog
    I'm a developer trying to get my .NET application to send emails out through our Exchange server. I'm not an Exchange expert, so I'll qualify that up front! We've set up a receive connector in Exchange that has the following properties:

    Network: allows all IP addresses via port 25.
    Authentication: Transport Layer Security and Externally Secured checkboxes are checked.
    Permission Groups: Anonymous Users and Exchange Servers checkboxes are checked.

    But when I run this PowerShell statement right on our Exchange server, it works when I send to a local domain address but fails when I try to send to a remote domain.

    WORKS:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER

    (BTW: my value for OURSERVER=boxname.domainname.local. This is the same fully qualified name that shows up in our Exchange Management Shell when I launch it.)

    FAILS:

        C:\Windows\system32> Send-MailMessage -To [email protected] -From [email protected] -Subject testing -Body testing -SmtpServer OURSERVER
        Send-MailMessage : Mailbox unavailable. The server response was: 5.7.1 Unable to relay
        At line:1 char:17
        + Send-Mailmessage <<<< -To [email protected] -From [email protected] -Subject testing -Body himom -SmtpServer FTI-EX
            + CategoryInfo          : InvalidOperation: (System.Net.Mail.SmtpClient:SmtpClient) [Send-MailMessage], SmtpFailedRecipientException
            + FullyQualifiedErrorId : SmtpException,Microsoft.PowerShell.Commands.SendMailMessage

    EDIT: From @TheCleaner's advice, I ran Add-ADPermission on the relay connector and it didn't help:

        [PS] C:\Windows\system32> Get-ReceiveConnector "Allowed Relay" | Add-ADPermission -User "NT AUTHORITY\ANONYMOUS LOGON" -ExtendedRights "Ms-Exch-SMTP-Accept-Any-Recipient"

        Identity               User                   Deny   Inherited
        --------               ----                   ----   ---------
        FTI-EX\Allowed Relay   NT AUTHORITY\ANON...   False  False

    Thanks for the help. Mark
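
    One thing worth verifying, offered as a hedged guess: Exchange routes a session to the receive connector whose RemoteIPRanges most specifically matches the sending host, so if the app server's traffic is landing on the Default connector instead of "Allowed Relay", the relay permission never applies. A sketch (the IP is illustrative):

        # which connector is the app server actually hitting?
        Get-ReceiveConnector | fl Name,Bindings,RemoteIPRanges,PermissionGroups
        # narrow the relay connector to just the app server so it wins the match
        Set-ReceiveConnector "Allowed Relay" -RemoteIPRanges 192.168.1.50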

  • Using psftp to upload and download files

    - by macha
    Hello, I am trying to upload and download files between my desktop and my server. After some searching, I downloaded psftp. I used to use FileZilla, but I cannot install it on my desktop for a few reasons; psftp (similar to PuTTY) is just an executable for file transfer. After going through this link http://www.math.tamu.edu/~mpilant/math696/psftp.html, I understood that put and get are the two commands I would use to upload and download files. Now when I log on to the server and say get filename, it actually throws back an error: "local: unable to open filename". I tried that with other files too, and I end up getting the same error. The psftp.exe file is on my desktop. The process that I am using is:

        (double click the .exe file)
        open "servername"
        cd /path/where/files/are
        get "filename"

    And I get this error: "local: unable to open filename". Am I making a mistake, or is it a problem with this executable?
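
    "local: unable to open" means psftp could not create the file on the Windows side, which usually comes down to the local working directory it inherited not being writable. A sketch of a session that pins the local directory first (the path is illustrative; lpwd and lcd are built-in psftp commands):

        psftp> lpwd                       # show where downloads would be written
        psftp> lcd C:\Users\you\Desktop   # switch to a directory you can write to
        psftp> cd /path/where/files/are
        psftp> get filename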

  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use on an unencumbered player. The curl aspect of this is working, as I can run a browser and curl side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script, the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
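
    The error line dst="'#standard{...}'" gives it away: the single quotes inside $CMD are being passed to cvlc literally, because quotes stored in a variable are not re-parsed when the variable expands. A sketch of the usual fix, keeping the command as a bash array ($ACCESS_URL is a hypothetical placeholder for the URL your curl steps produce):

        # store the command as an array so each argument keeps its quoting
        CMD=(cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' "$ACCESS_URL")
        exec "${CMD[@]}"

    (eval "$CMD" would also work, but arrays avoid a second round of quoting pitfalls.)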

  • Tool to Save a Range of Disk Clusters to a File

    - by Synetech inc.
    Hi, yesterday I deleted a (fragmented) archive file, only to find that it had not extracted correctly, so I was left stranded. Fortunately there was not much space free on the drive, so most of the space marked as free was from the now-deleted archive. I pulled up a disk editor and -- painfully -- managed to get a list of cluster ranges from the FAT that were marked as unused. My task then was to save these ranges of clusters to files so that I could examine them, try to determine which parts were from the archive, and recombine them to attempt to restore the deleted file. This turned out to be a huge pain because the disk editor did not have the ability to select a range of clusters, so I had to navigate to the start of each cluster and hold down Ctrl+Shift+PgDn until I reached the end of the range (which usually took forever!). I did a quick Google search to see if I could find a command-line tool (preferably with Windows and DOS versions) that would allow me to issue commands such as:

        SAVESECT -c 0xBEEF 0xCAFE FOO.BAR  ::save clusters 0xBEEF-0xCAFE to FOO.BAR
        SAVESECT -s 1111 9876 BAZ.BIN      ::save sectors 1111-9876 to BAZ.BIN

    Sadly my search came up empty. Any ideas? Thanks!
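
    Under Linux (e.g. from a boot CD), dd can do exactly this once the geometry is known. A sketch assuming 512-byte sectors; on FAT, cluster N begins at data_start + (N - 2) * sectors_per_cluster, and DATA_START / SPC below are placeholders you would read out of the boot sector:

        DATA_START=16384   # first sector of the data area (illustrative)
        SPC=8              # sectors per cluster (illustrative)
        dd if=/dev/sda1 of=FOO.BAR bs=512 \
           skip=$(( DATA_START + (0xBEEF - 2) * SPC )) \
           count=$(( (0xCAFE - 0xBEEF + 1) * SPC ))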

  • qsub: How can I find out exactly what DRM middleware is installed on a cluster?

    - by gojira
    I have a user account on a very big cluster. I have previous experience with Grid Engine and want to use the cluster for array jobs. The documentation tells me to use "qsub" for load balancing / submission of many jobs, so I assumed this meant the cluster has Grid Engine. However, all my Grid Engine scripts failed to run. I checked the documentation, and it is a bit weird. Now I slowly suspect that this cluster does not actually have Grid Engine; maybe it's running something called Torque (?!). The whole terminology in the man pages is a bit weird for me as a Grid Engine user; for example, they talk about "bulk jobs" instead of "array jobs". There is no reference to variables I rely on, like SGE_TASK_ID, etc. Instead they refer to variables starting with PBS_. Still, there are qsub and qstat commands. Also, qsub behaves differently; apparently it is not possible to specify the command-line parameters with bash-script comments, etc. There is documentation for the cluster system, but it does not say what the DRM middleware actually is; it refers to the entire DRM system simply as "qsub". I tried:

        qsub --version
        qsub: 1.2 2010/8/17

    I am not sure what I am actually running when I invoke qsub on that cluster! My question is: how can I find out whether I am running Grid Engine or Torque (or whatever it is), and which version?
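
    The PBS_ variables and "bulk jobs" wording already point strongly at a PBS derivative (Torque or PBS Pro) rather than Grid Engine. A few read-only probes that usually settle it (tool availability varies by installation, so treat these as hints rather than a definitive test):

        echo $SGE_ROOT              # set on Grid Engine systems, empty on PBS/Torque
        command -v qconf qhost      # SGE-specific administration tools
        command -v pbsnodes qmgr    # PBS/Torque-specific tools
        man qsub | head -5          # the man page header names the product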

  • Disc drive busy on MacBook, with disc stuck inside.

    - by ayaz
    I have a white MacBook running Snow Leopard, 10.6.3 with the latest updates. I popped in a DVD that the system failed to mount; I did not see any conspicuous errors. As a result of this failure, the DVD got stuck in the drive. Neither pressing the eject button on the keyboard nor running the diskutil eject command caused the DVD to come out. The commands drutil eject and drutil tray open could not get the DVD to budge at all. The 'mount' and 'eject' buttons in the Disk Utility window are dimmed out, and Disk Utility reports that that particular disc drive is busy. This is not the first time this has happened to me. I know that I will ultimately have to resort to rebooting the system and holding down the eject button to get the DVD to come out. But is there any workaround that does not involve rebooting the system or prying the disc out? The drive on this MacBook does not have a needle-pin reset button -- at least, I couldn't find one anywhere. Any help will be greatly appreciated. Many thanks.
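
    Since Disk Utility says the drive is busy, some process is holding the device open, and quitting that process sometimes frees the tray without a reboot. A diagnostic sketch (the disk identifier is illustrative; check diskutil list first):

        $ diskutil list                 # find the optical disc's identifier
        $ sudo lsof /dev/disk1          # list processes holding the device open
        $ drutil eject                  # retry once the offending process quits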

  • Cannot boot from RAID with GRUB

    - by Andrew Answer
    I have a RAID1 array on my Ubuntu 12.04 LTS system, and my /dev/sda HDD was replaced several days ago. I used these commands for the replacement:

        # go to superuser
        sudo bash
        # see RAID state; State should be "clean, degraded"
        mdadm -Q -D /dev/md0
        # remove broken disk from RAID
        mdadm /dev/md0 --fail /dev/sda1
        mdadm /dev/md0 --remove /dev/sda1
        # see partitions
        fdisk -l
        # shut down the computer
        shutdown now
        # physically replace old disk with new, start system again
        # see partitions
        fdisk -l
        # copy partitions from sdb to sda
        sfdisk -d /dev/sdb | sfdisk /dev/sda
        # recreate id for sda
        sfdisk --change-id /dev/sda 1 fd
        # add sda1 to RAID
        mdadm /dev/md0 --add /dev/sda1
        # see RAID state; State should be "clean, degraded, recovering"
        mdadm -Q -D /dev/md0
        # to see status you can use
        cat /proc/mdstat

    After the rebuild completed, "fdisk -l" says /dev/md0 does not have a valid partition table. So:

    1) "update-grub" finds only the /dev/sda and /dev/sdb Linux installs, not /dev/md0
    2) "dpkg-reconfigure grub-pc" says "GRUB failed to install the following devices /dev/md0"

    I cannot boot my system except from /dev/sdb1 or /dev/sda1, and then only in DEGRADED mode... This is my partial fdisk -l output:

        Disk /dev/sdb: 500.1 GB, 500107862016 bytes
        255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000667ca

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *          63   940910984   470455461   fd  Linux raid autodetect
        /dev/sdb2       940910985   976768064    17928540    5  Extended
        /dev/sdb5       940911048   976768064    17928508+  82  Linux swap / Solaris

        Disk /dev/md0: 481.7 GB, 481746288640 bytes
        2 heads, 4 sectors/track, 117613840 cylinders, total 940910720 sectors
        Units = sectors of 1 * 512 = 512 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00000000

        Disk /dev/md0 doesn't contain a valid partition table

    Can anybody resolve this issue? I have a big headache with this.
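
    "Doesn't contain a valid partition table" on /dev/md0 is expected here: the array holds a filesystem directly, with no partition table inside it, so that message is harmless. What matters is that GRUB's boot code must live in the MBR of each physical member disk, not on the md device. A sketch of the usual repair, run from the booted (even degraded) system:

        # install GRUB on both members so either disk alone can boot
        sudo grub-install /dev/sda
        sudo grub-install /dev/sdb
        sudo update-grub
        # when dpkg-reconfigure grub-pc asks for install devices,
        # tick /dev/sda and /dev/sdb and leave /dev/md0 unticked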

  • Any non-custom way to manage iptables with fail2ban and libvirt+kvm?

    - by Peter Hansen
    I have an Ubuntu 9.04 server running libvirt/KVM and fail2ban (for SSH attacks). Both libvirt and fail2ban integrate with iptables in different ways. Libvirt uses (I think) some XML config and, during startup(?), configures forwarding to the VM subnet. Fail2ban installs a custom chain (probably at init) and periodically modifies it to ban/unban probable attackers. I also need to install my own rules to forward various ports to servers running in VMs and on other machines, and to set up rudimentary security (e.g., drop all INPUT traffic except the few ports I want open); and of course I'd like the ability to add/remove rules safely without restarting.

    It seems to me iptables is a powerful tool that's sorely lacking some sort of standardized way of juggling all this stuff. Every project, and every sysadmin, seems to do it differently! (And I think there's lots of "cargo cult" admin going on here, with people cloning crude approaches like "use iptables-save like so".) Short of figuring out the gory details of exactly how both of these (and potentially other) tools manipulate the netfilter tables, and developing my own scripts or just manually executing iptables commands, is there any way to safely work with iptables while not breaking the functionality of these other tools? Any nascent standards or projects defined to bring sanity to this area? Even a helpful web page I missed that might cover at least these two packages together?
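
    One pattern that coexists reasonably well with both tools, offered as a sketch rather than a standard: keep all of your own rules in a dedicated chain and only ever flush and rebuild that chain, so fail2ban's chain insertions and libvirt's forwarding rules are never touched (the ports are illustrative):

        iptables -N myrules 2>/dev/null      # create once; ignore error if it exists
        iptables -F myrules                  # rebuild only our own chain
        iptables -A myrules -p tcp --dport 22 -j ACCEPT
        iptables -A myrules -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A myrules -j DROP
        # append the jump once (e.g. from an init script); appending rather than
        # inserting means fail2ban's chains keep first crack at packets
        iptables -A INPUT -j myrules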

  • gcc built by crosstool-ng gives undefined reference

    - by netvope
    I've successfully built a toolchain using crosstool-ng with the default configuration named x86_64-unknown-linux-gnu. The documentation says:

        Using the toolchain is as simple as adding the toolchain's bin directory in your PATH, such as:
            export PATH="${PATH}:/your/toolchain/path/bin"
        and then using the target tuple to tell the build systems to use your toolchain:
            ./configure --target=your-target-tuple
        or
            make CC=your-target-tuple-gcc
        or
            make CROSS_COMPILE=your-target-tuple-
        and so on...

    I followed the instructions and attempted to build GNU tar (tar-1.25.tar.bz2) with the toolchain. The commands ./configure --target=x86_64-unknown-linux-gnu and make CROSS_COMPILE=x86_64-unknown-linux-gnu- do not work (the build succeeds, but it uses the host system's gcc). The command make CC=x86_64-unknown-linux-gnu-gcc works, but in the very last step, when it tries to link, it returns errors like this:

        compare.o: In function `openat':
        /dev/shm/x-tools/x86_64-unknown-linux-gnu/x86_64-unknown-linux-gnu/sys-root/usr/include/bits/fcntl2.h:134: undefined reference to `__openat_2'

    What could be the problem? Was the toolchain not properly set up? Perhaps x86_64-unknown-linux-gnu-gcc is using the header files from the host system but could not find the libraries in the target's sysroot?
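
    For autoconf packages like tar, the flag that selects a cross toolchain is --host, not --target (--target only matters for packages that themselves emit code, such as binutils or gcc). Mixing make CC=... into a build that was configured for the host can leave some objects compiled against one libc and linked against another, which is consistent with an undefined fortify stub like __openat_2. A sketch of a clean retry:

        make distclean
        ./configure --host=x86_64-unknown-linux-gnu \
                    CC=x86_64-unknown-linux-gnu-gcc
        make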

  • One-to-Many PowerShell Scripts

    - by Matt
    I'm trying to create a script to run as a scheduled task which will run against multiple servers and retrieve some information. To start with, I populate the list of servers by querying AD for all servers that match a certain set of criteria, using Get-ADComputer. The problem is, the list is returned as objects, which I can't then pass to New-PSSession. I have tried converting it to a comma-separated string by doing the following:

        foreach ($server in $serverlist) {$newlist += $server.Name + ","}

    but this still doesn't work. The alternative is to iterate through the list and run the various commands against each server one at a time, but my preference would be to avoid this and run them using one-to-many remoting.

    UPDATE: To clarify, what I want to end up being able to do is use -ComputerName $serverlist, so I want $serverlist to be a string rather than an object.

    UPDATE 2: Thanks for all the suggestions. Between them and my original method, I'm starting to wonder whether -ComputerName can accept a string variable? I've had varying degrees of success getting the list of computers converted to a comma-separated string, but no matter how I do it, I always get "invalid network address".
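
    -ComputerName actually takes an array of strings, not one comma-separated string (a single string like "srv1,srv2" is treated as one bogus hostname, which would explain "invalid network address"). A sketch (the AD filter is illustrative):

        # expand just the Name property into a plain string array
        $serverlist = Get-ADComputer -Filter 'OperatingSystem -like "*Server*"' |
                      Select-Object -ExpandProperty Name
        # one-to-many remoting against the whole array at once
        $sessions = New-PSSession -ComputerName $serverlist
        Invoke-Command -Session $sessions -ScriptBlock { Get-Date }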

  • Programs don't have permissions when using absolute path

    - by Markos
    I have asked this on Ask Ubuntu but didn't get a single response in days, so I will try it here. I have a directory structure like this:

        /path/dir1      - all users in group1 must have rwx permissions, including subdirs and newly created dirs
        /path/dir1/dir2 - users in group2 must also have rwx permissions

    So I used ACLs:

        getfacl /path/dir1
        # file: /path/dir1
        # owner: root
        # group: nogroup
        user::rwx
        group::---
        group:group1:rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::---
        default:group:group1:rwx
        default:mask::rwx
        default:other::---

        getfacl /path/dir1/dir2
        # file: /path/dir1/dir2
        # owner: root
        # group: nogroup
        user::rwx
        group::---
        group:group1:rwx
        group:group2:rwx
        mask::rwx
        other::---
        default:user::rwx
        default:group::---
        default:group:group1:rwx
        default:group:group2:rwx
        default:mask::rwx
        default:other::---

    That shows that I have granted rwx to group1 on /path/dir1, and rwx to group1 and group2 on /path/dir1/dir2. Now it gets interesting. Assume user2 is a member of group2. If I issue these commands as user2:

        cd /path/dir1/dir2
        mkdir foo

    the folder is successfully created. However, if I do this:

        mkdir /path/dir1/dir2/foo

    I get a permission-denied error. I have tried extensively to resolve the problem. What I have found is that the ACL is to blame: if I add permissions for group2 on /path/dir1, it starts to work. Also, if I completely remove the ACL on /path/dir1, it starts to work. Obviously I am missing something VERY basic. I don't have much experience with Linux, but this is a no-brainer on Windows. I have spent way too many hours trying to resolve this basic requirement. If you need more information, I will try to update the question, so feel free to ask!
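
    A hedged diagnosis: resolving a path requires execute (search) permission on every component for user2, and the ACL on /path/dir1 grants group2 nothing at all. A traversal-only entry lets paths through dir1 resolve without exposing dir1's contents (namei is from util-linux and shows exactly where resolution fails):

        sudo -u user2 namei -l /path/dir1/dir2/foo   # pinpoint the failing component
        setfacl -m g:group2:--x /path/dir1           # execute-only = traversal, no listing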

  • converting a png with an ICC profile?

    - by jedierikb
    I can convert a JPG from one ICC profile to another:

        convert rgb_image.jpg -profile USCoat.icm cmyk_image.jpg

    Or I can convert a JPG with no ICC profile to another profile:

        convert rgb_image.jpg +profile icm \
            -profile sRGB.icc -profile USCoat.icm cmyk_image.jpg

    But how do I convert a PNG's pixels into the gamut described by an ICC profile? I understand I cannot embed the profile into the image file, but I would at least like to convert the colors. When I reuse the above commands, the colors come out wrong (different from the colors in the JPG when converted). This is the source image: http://alumni.media.mit.edu/~erikb/tmp/RED_JPG.jpg And here is what I am trying:

        convert RED_JPG.jpg +profile icm -profile sRGB_v4_ICC_preference.icc -profile USWebUncoated.icc CMYK_PNG.png

    and this is what I am getting: http://alumni.media.mit.edu/~erikb/tmp/CMYK_PNG.png I was hoping to get an image with the same colors as a JPEG run through the same command:

        convert RED_JPG.jpg +profile icm -profile sRGB_v4_ICC_preference.icc -profile USWebUncoated.icc CMYK_JPG.jpg

    resulting in: http://alumni.media.mit.edu/~erikb/tmp/CMYK_JPG.jpg This image, CMYK_JPG.jpg, is what I am trying to reproduce, pixel by pixel, in a PNG file. Any suggestions? Original (unanswered) post here.
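
    The likely culprit is that PNG has no CMYK storage: the -profile chain produces CMYK pixel data, which the PNG encoder then has to force back into RGB, scrambling the colors. One common workaround, as a sketch: do the round trip explicitly, converting into the press profile and back to sRGB, so the PNG holds an RGB rendering of how the CMYK version looks (the output name is illustrative):

        convert RED_JPG.jpg +profile icm \
            -profile sRGB_v4_ICC_preference.icc \
            -profile USWebUncoated.icc \
            -profile sRGB_v4_ICC_preference.icc \
            PROOF_PNG.png

    (If a file with real CMYK data is required, TIFF is the usual container instead of PNG.)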

  • Ubuntu 10.04 RGBA (transparent theme) bug

    - by Nrew
    I tried to follow this tutorial from ghacks.net, but I ended up with a bug: every time I try to change the desktop background or the theme, it opens up lots of folders and then closes them again, so I cannot do anything when I try to change the background or reset the theme to the default. Here is the tutorial. And I can't even shut down my machine now; please help.

    EDIT: I tried executing these commands in the terminal:

        sudo add-apt-repository ppa:erik-b-andersen/rgba-gtk
        sudo apt-get update && sudo apt-get upgrade
        sudo apt-get install gnome-color-chooser gtk2-module-rgba
        sudo apt-get install murrine-themes

    This installed GNOME Color Chooser. It works, and the windows became transparent, but I can't change back to the default theme and I can't set a background. Here's a screenshot of what happens when I try to change something in the preferences. It's just about okay if I can't change the theme, but it won't let me change the background either. Or maybe the background is changed but is transparent, just like the other windows; you can see that in the taskbar (or whatever it's called in Ubuntu), which is full. What can I do to at least make the background visible?

  • iptables secure squid proxy

    - by Lytithwyn
    I have a setup where my incoming internet connection feeds into a squid proxy/caching server and from there into my local wireless router. On the WAN side of the proxy server, I have eth0 with address 208.78.∗∗∗.∗∗∗. On the LAN side, I have eth1 with address 192.168.2.1. Traffic from my LAN gets forwarded through the proxy transparently to the internet via the following rules. Note that traffic from the squid server itself is also routed through the proxy/cache, and this is on purpose:

        # iptables forwarding
        iptables -A FORWARD -i eth1 -o eth0 -s 192.168.2.0/24 -m state --state NEW -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A POSTROUTING -t nat -j MASQUERADE

        # iptables for squid transparent proxy
        iptables -t nat -A PREROUTING -i eth1 -p tcp --dport 80 -j DNAT --to 192.168.2.1:3128
        iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 3128

    How can I set up iptables to block any connections made to my server from the outside, while not blocking anything initiated from the inside? I have tried:

        iptables -A INPUT -i eth0 -s 192.168.2.0/24 -j ACCEPT
        iptables -A INPUT -i eth0 -j REJECT

    But this blocks everything. I have also tried reversing the order of those commands in case I got that part wrong, but that didn't help. I guess I don't fully understand everything about iptables. Any ideas?
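
    Two things stand out in the attempted rules: -i eth0 -s 192.168.2.0/24 can never match (LAN addresses arrive on eth1, not the WAN interface), and without a state rule the REJECT also kills reply packets for connections the proxy itself opened. A sketch of a stateful INPUT policy consistent with the interfaces above:

        iptables -A INPUT -i lo -j ACCEPT
        iptables -A INPUT -i eth1 -s 192.168.2.0/24 -j ACCEPT              # trust the LAN side
        iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT   # replies to outbound traffic
        iptables -A INPUT -i eth0 -j DROP                                  # everything else from the WAN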

  • Shrinking a large transaction log on a full drive

    - by Sam
    Someone fired off an UPDATE statement as part of some maintenance which did a cross-join update on two tables with 200,000 records each. That's 40 billion row combinations, which would explain part of how the log grew to 200 GB. I also did not have the log file capped, which is another problem I will be taking care of server-wide, as we have almost 200 databases residing here. The 'solution' I used was to back up the database, back up the log with TRUNCATE_ONLY, and then back up the database again. I then shrank the log file and set a cap on the log. Seeing as other databases were using the log drive, I was in a bit of a rush to clean it out. I might have been able to back the log file up to our backup drive, hoping that no other databases needed to grow their log files. Paul Randal, from http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx:

        Under no circumstances should you delete the transaction log, try to rebuild it using undocumented commands, or simply truncate it using the NO_LOG or TRUNCATE_ONLY options of BACKUP LOG (which have been removed in SQL Server 2008). These options will either cause transactional inconsistency (and more than likely corruption) or remove the possibility of being able to properly recover the database.

    Were there any other options I'm not aware of?
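
    For comparison, a sketch of the sequence that avoids breaking the log chain, assuming there is room somewhere for the log backup (database names and paths are illustrative):

        -- back the log up to a file instead of truncating it
        BACKUP LOG [MyDb] TO DISK = N'E:\Backups\MyDb_log.trn';
        -- then shrink the physical file to a sane size (MB)
        DBCC SHRINKFILE (MyDb_log, 2048);
        -- and cap future growth
        ALTER DATABASE [MyDb] MODIFY FILE (NAME = MyDb_log, MAXSIZE = 20480MB);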

  • How can I avoid a few seconds of blank video when using -vcodec copy?

    - by arlomedia
    I'm processing user-uploaded videos on a CentOS web server with ffmpeg. I need to convert each video to a standard size and format, then extract a 30-second sample clip from each video. I want to use the "-vcodec copy" flag in the extraction command to avoid encoding a second time. This command works for my initial conversion:

        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4

    And this sometimes works for the extraction:

        ffmpeg -i formatted.mp4 -vcodec copy -acodec copy -ss 0 -t 30 formatted_sample.mp4

    However, when I run the extraction command on some videos, the extracted sample clip starts with several seconds of blank video: the audio starts right away, but the video doesn't start for 3-6 seconds. To demonstrate the problem, I've uploaded two video clips and run the above commands on them. I created the first clip in Final Cut Express and encoded it with Handbrake before uploading to the web server:

        1a) uploaded clip
        1b) converted with the first command
        1c) extracted with the second command; missing the first six seconds

    By comparison, this second clip comes from Apple's website and does not show the problem:

        2a) uploaded clip
        2b) converted with the first command
        2c) extracted with the second command; no problem

    Can anyone see what's different about the two source clips? And if so, is there anything I can do in my conversion command so that when the extraction command runs, the clip is set up to avoid the missing video? By the way, I initially had the problem with ffmpeg 0.6.1 installed from yum, but I upgraded to the latest git version and the problem remains.
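
    With -vcodec copy, the extracted video can only begin at a keyframe; if the nearest keyframe sits seconds into the cut window, ffmpeg keeps the audio but emits no video until that keyframe arrives, which matches the 3-6 second blank lead-in. Two hedged things to try (exact option behavior depends on your ffmpeg build): seek on the input side, and make the first encode produce keyframes at predictable intervals:

        # input-side seek: starts the copy at the keyframe at or just before 0
        ffmpeg -ss 0 -i formatted.mp4 -t 30 -vcodec copy -acodec copy formatted_sample.mp4
        # during conversion, cap the GOP so a keyframe occurs at least once per
        # second at 15 fps, giving every extraction window a nearby keyframe
        ffmpeg -i uploaded.mov -f mp4 -vcodec libx264 -vpre medium -g 15 \
               -acodec libfaac -r 15 -b 360k -ab 48k -ar 22050 -s 480x320 formatted.mp4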

  • Production monitoring for EC2 instances

    - by Janine
    I'm setting up my first production instance on EC2 and want to make sure I have all necessary monitoring in place. There are three different types of things I want to monitor:

    1. Is the instance running? EC2 instances can be terminated without warning if the underlying hardware fails, and as far as I know they aren't automatically restarted. So if not, start it back up.
    2. Is UNIX running properly? This is the usual stuff about CPU load, disk space, etc.
    3. Is the website responding? If not, restart it.

    I initially set up Nagios on a physical server outside the cloud, but it is really only helpful for item 2. It can tell me if the instance is gone or if the website is not responding, but as far as I can tell it can't execute any commands to fix the situation. My Googling on this subject has yielded a plethora of options: Cacti, Monit, God, Ganglia, and probably more I'm forgetting now. I don't have time to research them all. I am aware of Amazon's CloudWatch, but it doesn't seem to do anything that my Nagios installation doesn't already do. If you already have something like this in place, can you please share what has worked well for you?
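
    Of the tools listed, Monit is the one built around the "check, then act" model in items 1 and 3. A sketch of a Monit stanza for item 3, running on the instance itself (the service name and commands are illustrative):

        check process nginx with pidfile /var/run/nginx.pid
            start program = "/etc/init.d/nginx start"
            stop program  = "/etc/init.d/nginx stop"
            if failed port 80 protocol http then restart

    Item 1 still needs something outside the instance (e.g., the external Nagios box plus the EC2 API tools), since a terminated instance cannot restart itself.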
