Search Results

Search found 27530 results on 1102 pages for 'write binary'.


  • How to set up ssh's umask for all types of connections

    - by Unode
    I've been searching for a way to set OpenSSH's umask to 0027 consistently across all connection types. By connection types I'm referring to:

      1. sftp
      2. scp
      3. ssh hostname
      4. ssh hostname program

    The difference between 3. and 4. is that the former starts a shell, which usually reads /etc/profile, while the latter doesn't. In addition, by reading this post I've become aware of the -u option that is present in newer versions of OpenSSH. However, this doesn't work. I must also add that /etc/profile now includes umask 0027.

    Going point by point:

    sftp - Setting -u 0027 in sshd_config, as mentioned here, is not enough. If I don't set this parameter, sftp uses umask 0022 by default. This means that if I have these two files:

      -rwxrwxrwx 1 user user 0 2011-01-29 02:04 execute
      -rw-rw-rw- 1 user user 0 2011-01-29 02:04 read-write

    when I use sftp to put them on the destination machine, I actually get:

      -rwxr-xr-x 1 user user 0 2011-01-29 02:04 execute
      -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write

    However, when I set -u 0027 in the sshd_config of the destination machine, I get:

      -rwxr--r-- 1 user user 0 2011-01-29 02:04 execute
      -rw-r--r-- 1 user user 0 2011-01-29 02:04 read-write

    which is not expected, since it should actually be:

      -rwxr-x--- 1 user user 0 2011-01-29 02:04 execute
      -rw-r----- 1 user user 0 2011-01-29 02:04 read-write

    Does anyone understand why this happens?

    scp - Independently of what is set up for sftp, permissions always follow umask 0022. I currently have no idea how to alter this.

    ssh hostname - No problem here, since the shell reads /etc/profile by default, which means umask 0027 in the current setup.

    ssh hostname program - Same situation as scp.

    In sum: setting the umask for sftp alters the result, but not as it should; ssh hostname works as expected, reading /etc/profile; and both scp and ssh hostname program seem to have umask 0022 hardcoded somewhere. Any insight on any of the above points is welcome.

    EDIT: I would like to avoid patches that require manually compiling OpenSSH. The system is running Ubuntu Server 10.04.01 (lucid) LTS with openssh packages from maverick.

    Answer: As indicated by poige, using pam_umask did the trick. The exact change was these lines added to /etc/pam.d/sshd:

      # Setting UMASK for all ssh based connections (ssh, sftp, scp)
      session optional pam_umask.so umask=0027

    Also, in order to affect all login shells regardless of whether they source /etc/profile, the same lines were added to /etc/pam.d/login.

    EDIT: After some of the comments I retested this issue. At least on Ubuntu (where I tested), it seems that if the user has a different umask set in their shell's init files (.bashrc, .zshrc, ...), the PAM umask is ignored and the user-defined umask is used instead. Changes in /etc/profile didn't affect the outcome unless the user explicitly sourced those changes in the init files. It is unclear at this point whether this behavior happens in all distros.
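    For reference, a minimal sketch of the pam_umask change that worked, together with the sftp-server -u form discussed above; Debian/Ubuntu file paths are assumed and may differ elsewhere:

      # /etc/ssh/sshd_config -- sftp-server's -u option (OpenSSH 5.4+);
      # note this only affects sftp, not scp or "ssh hostname program"
      Subsystem sftp /usr/lib/openssh/sftp-server -u 0027

      # /etc/pam.d/sshd (and /etc/pam.d/login) -- applies to all PAM sessions
      session    optional    pam_umask.so umask=0027

    After restarting sshd, a quick check is to put a file over sftp and inspect its mode on the server with ls -l.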

    Read the article

  • Manage a hosted file by email

    - by Toc
    I need a service which allows me to edit a hosted text file by email. For example: I write an email to [email protected] with subject "ADD LINE 10", and on sending it, the mail body is inserted into myfile, which is hosted on a server at someservice.ext. The same goes for "DELETE LINE 12" or "SUBSTITUTE LINE 15". If I write "ECHO FILE" or something similar, the service should send me an email with the updated content of the text file. Does it exist?
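    In case no such hosted service exists, here is a rough self-hosted sketch; every name in it is a placeholder, and it assumes GNU sed, procmail's formail, and a single-line mail body. The script would be wired to an address by piping incoming mail to it, e.g. via an /etc/aliases entry like: myfile: "|/usr/local/bin/edit-by-mail.sh".

      #!/bin/bash
      # edit-by-mail.sh -- hypothetical sketch of an edit-a-file-by-email service
      FILE=/var/lib/myfile.txt
      msg=$(cat)                                   # the raw message arrives on stdin
      subject=$(formail -czx Subject: <<<"$msg")   # e.g. "ADD LINE 10"
      sender=$(formail -rtzx To: <<<"$msg")        # address to send replies to
      body=$(sed '1,/^$/d' <<<"$msg")              # everything after the headers
      n=${subject##* }                             # trailing line number, if any
      case $subject in
        "ADD LINE "*)        sed -i "${n}i ${body}" "$FILE" ;;
        "DELETE LINE "*)     sed -i "${n}d" "$FILE" ;;
        "SUBSTITUTE LINE "*) sed -i "${n}c ${body}" "$FILE" ;;
        "ECHO FILE"*)        mail -s "myfile contents" "$sender" < "$FILE" ;;
      esac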

    Read the article

  • How to keep group-writeable shares on Samba with OSX clients?

    - by Oliver Salzburg
    I have a FreeNAS server on a network with OSX and Windows clients. When the OSX clients interact with SMB/CIFS shares on the server, they cause permission problems for all other clients. Update: I can no longer verify any answers because we abandoned the project, but feel free to post any help for future visitors.

    The details of this behavior also seem to depend on the version of OSX the client is running. For this question, let's assume a client running 10.8.2.

    When I mount the CIFS share on an OSX client and create a new directory on it, the directory is created with drwxr-xr-x permissions. This is undesirable because it will not allow anyone but me to write to the directory, and there are other users in my group who should have write permissions as well. This behavior happens even though the following settings are present in smb.conf on the server:

      [global]
      create mask = 0666
      directory mask = 0777

      [share]
      force directory mode = 0775
      force create mode = 0660

    I was under the impression that these settings should make sure that directories are created with at least rwxrwxr-x permissions. But, I guess, that doesn't stop the client from changing the permissions after creating the directory. When I create a folder on the same share from a Windows client, the new folder gets the desired access permissions (rwxrwxrwx), so I'm currently assuming that the problem lies with the OSX client.

    I guess this wouldn't be such an issue if you could easily change the permissions of the directories you've created, but you can't. When opening the directory info in Finder, I get the old "You have custom access" notice with no ability to make any changes. I'm assuming that this is caused by the fact that we're using Windows ACLs on the share, but that's just a wild guess. Changing the write permissions for the group through the terminal works fine, but this is impractical for the deployment and unreasonable to expect from anyone.

    This is the complete smb.conf:

      [global]
      encrypt passwords = yes
      dns proxy = no
      strict locking = no
      read raw = yes
      write raw = yes
      oplocks = yes
      max xmit = 65535
      deadtime = 15
      display charset = LOCALE
      max log size = 10
      syslog only = yes
      syslog = 1
      load printers = no
      printing = bsd
      printcap name = /dev/null
      disable spoolss = yes
      smb passwd file = /var/etc/private/smbpasswd
      private dir = /var/etc/private
      getwd cache = yes
      guest account = nobody
      map to guest = Bad Password
      obey pam restrictions = Yes
      # NOTE: read smb.conf.
      directory name cache size = 0
      max protocol = SMB2
      netbios name = freenas
      workgroup = COMPANY
      server string = FreeNAS Server
      store dos attributes = yes
      hostname lookups = yes
      security = user
      passdb backend = ldapsam:ldap://ldap.company.local
      ldap admin dn = cn=admin,dc=company,dc=local
      ldap suffix = dc=company,dc=local
      ldap user suffix = ou=Users
      ldap group suffix = ou=Groups
      ldap machine suffix = ou=Computers
      ldap ssl = off
      ldap replication sleep = 1000
      ldap passwd sync = yes
      #ldap debug level = 1
      #ldap debug threshold = 1
      ldapsam:trusted = yes
      idmap uid = 10000-39999
      idmap gid = 10000-39999
      create mask = 0666
      directory mask = 0777
      client ntlmv2 auth = yes
      dos charset = CP437
      unix charset = UTF-8
      log level = 1

      [share]
      path = /mnt/zfs0
      printable = no
      veto files = /.snap/.windows/.zfs/
      writeable = yes
      browseable = yes
      inherit owner = no
      inherit permissions = no
      vfs objects = zfsacl
      guest ok = no
      inherit acls = Yes
      map archive = No
      map readonly = no
      nfs4:mode = special
      nfs4:acedup = merge
      nfs4:chown = yes
      hide dot files
      force directory mode = 0775
      force create mode = 0660
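    For anyone landing here with the same problem, two small things that are sometimes useful. On the FreeNAS box, testparm shows the settings Samba actually loaded, which confirms whether the force modes survived parsing; and the terminal workaround mentioned above looks roughly like this from an OSX client (the paths are placeholders):

      # On the server: verify the effective share settings
      testparm -s | grep -E 'force (create|directory) mode'

      # On the OSX client: one-off repair of a directory the Mac created
      chmod -R g+rwX /Volumes/share/some-directory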

    Read the article

  • NTFS external drive takes too long to read data on Mac?

    - by mgpyone
    I have a Seagate 500 GB external HD (NTFS). To read/write it on my Mac (OSX 10.6.2), I've tried MacFUSE and NTFS-3G. Though I can see the hard drive, it takes a very long time just to list the contents. Is this normal? Data transfers also take a very long time, and the hard disk gets very hot. Any suggestions are most welcome.
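    One thing that sometimes helps NTFS-3G throughput, if the MacFUSE build in use supports it, is the FUSE big_writes mount option, which reduces per-call overhead. A sketch; the device node and mount point are assumptions, so check yours with diskutil list first:

      # Unmount the auto-mounted volume, then remount through ntfs-3g
      diskutil unmount /dev/disk1s1
      mkdir -p /Volumes/Seagate
      sudo ntfs-3g /dev/disk1s1 /Volumes/Seagate -o big_writes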

    Read the article

  • Possible to mount an ext4 partition image via FUSE?

    - by Catskul
    I'm attempting to mount an ext4 partition image in userspace (no sudo, no special config/permission modifications to /dev/loop0 or /etc/fstab, etc.), so I'm hoping FUSE will come to the rescue. However, it seems that each file system mounted through FUSE needs a special FUSE driver, and I've not been able to find a read-write ext4 FUSE driver for Linux. Is there a way to mount ext4 images via FUSE (with write permission)?
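    For what it's worth, the fuse-ext2 project advertises experimental write support through its rw+ option, and ext4fuse is a read-only alternative. A sketch, assuming the image holds a bare filesystem rather than a full disk with a partition table:

      mkdir -p ~/mnt

      # Read-write (experimental) with fuse-ext2:
      fuse-ext2 disk.img ~/mnt -o rw+

      # Or read-only with ext4fuse:
      ext4fuse disk.img ~/mnt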

    Read the article

  • How to check that a database has not been unmounted / marked for overwrite

    - by RPS
    Hi. When restoring Exchange with the VSS API, I'm trying to catch errors in two cases:

      1. When the database being restored has not been unmounted: Exchange 2010 generates an error on the PreRestore call and writes an error to the Windows Application log, which is all OK; but on Exchange 2007, PreRestore succeeds and the error is only written to the Windows Application log.
      2. When the restored database has been unmounted but has not been marked for overwrite: Exchange 2007/2010 write an error to the Windows Application log, but the PreRestore call succeeds.

    How can I know from my application (via the VSS API, not from the Windows Application log) that an error has happened (the database has not been unmounted, or has not been marked for overwrite)? Thanks
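    Not an answer via the VSS API, but if querying Exchange directly is acceptable, the Exchange Management Shell exposes both states (a sketch; DB1 is a placeholder database name):

      # Is the database mounted?
      Get-MailboxDatabase DB1 -Status | Format-List Mounted

      # Is "This database can be overwritten by a restore" set?
      Get-MailboxDatabase DB1 | Format-List AllowFileRestore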

    Read the article

  • Diff 2 files while ignoring parts of lines

    - by Millianz
    I would like to diff a file system. Currently my bash script prints the file system recursively into a file (ls -l -R) and diffs it with an expected output. An example line in this file would be:

      drw---- 100000f3 00000400 0 ./foo/

    My current diff command is:

      diff "$TEMP_LOG" "$DIFF_FILE_OUT" --strip-trailing-cr --changed-group-format='%<' --unchanged-group-format='' "$SubLog"

    As you can see, I ignore additional lines in the current output file; I only care about lines that match the master output. I now have the problem that some files may differ in size, or a folder might even have a different name, but due to its location I know what access rights it should have. For example:

      Output: ------- 00000000 00000000 528 ./foo/bar.txt
      Master: ------- 00000000 00000000 200 ./foo/bar.txt

    Only the size differs here, and it doesn't matter. I would like to just ignore certain parts of the diff, kind of like an ANSI C comment:

      Master: ------- 00000000 00000000 /*200*/ ./foo/bar.txt

      -- OR --

      Master: d------ 00000000 00000000 /*10*/ ./foo//*123123*///*76456546*//bar.txt
      Output: d------ 00000000 00000000 0 ./foo/asd/sdf/bar.txt

    And still have it diff correctly. Is this even possible with diff, or will I have to write a custom script for it? Since I'm fairly new to cygwin, I might be using the completely wrong tool altogether; I'm happy for any suggestions.

    Update: Taking a step back, here is the general task I want to achieve. I want to write a script that checks the file system to see if the read/write permissions are set up correctly. The structure of the file system is under my control, so I don't have to worry about it changing too much. Sometimes folders/files might not be present, but if they are, their permissions must be checked. For example, assume that the following is a snapshot of the current file system structure:

      drw ./foo
      drw ./foo/bar
      -rw ./foo/bar/bar.txt
      drw ./foo/baz
      -rw ./foo/baz/baz.txt

    And this is what the file system structure might dictate, i.e. if these folders/files are present, their permissions must match:

      drw ./foo
      drw ./foo/bar
      -rw ./foo/bar/bar.txt
      --- ./foo/bar/foobar.txt
      drw ./foo/baz
      -rw ./foo/baz/foobaz.txt

    In this case the file system checks out OK, since all files present match their expected values. The situation becomes more complicated as soon as certain folders can have any arbitrary name, and only their location tells me what their permissions should be. Assume that the directory ./foo/bar in the above example is such a case, i.e. instead of bar the folder could have any name, but must still match the -rw permissions. This seems like a very complicated situation, and I'm not even sure I can solve it with bash scripting alone; I might have to write an actual application.
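    One common way to get this effect with plain diff is to normalize both listings first, stripping the fields that are allowed to differ. A sketch, assuming whitespace-separated columns where only the fourth column (the size) should be ignored:

      # Blank out column 4 in both files, then diff the normalized streams
      strip_size() { awk '{ $4 = ""; print }' "$1"; }
      diff <(strip_size "$DIFF_FILE_OUT") <(strip_size "$TEMP_LOG") --strip-trailing-cr

    The arbitrary-folder-name case is harder, since the path column itself varies; that part probably does need a custom script that walks the expected structure and applies per-location rules.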

    Read the article

  • How can I enable Credential Manager on Windows 7

    - by Bastian
    I have a Windows 7 machine in the domain, and on the server we share some folders. The problem is that when I map a drive on the Windows 7 machine, I can access the folder, but I have to type the server's username and password every time I power on the machine. I checked Credential Manager in Control Panel, and the option is disabled; I don't know how to enable it so the password is saved and I don't have to type it every time I log in.
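    As an alternative to the Credential Manager UI, a stored credential can also be added from a command prompt; a sketch, where the server, user, and share names are placeholders:

      cmdkey /add:fileserver /user:DOMAIN\username /pass:secret
      net use Z: \\fileserver\share /persistent:yes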

    Read the article

  • How should I interpret the specifications of an SSD?

    - by paulgreg
    When considering buying an SSD, how should I interpret its different specifications? Here are some specific things that need to be deciphered:

      - Controller (this can affect performance and endurance more than all other factors combined)
      - Bus Technology
      - Form Factor (Physical Size)
      - Capacity
      - NAND or NOR technology
      - Power Consumption during Read, during Write, when Idle
      - Read/Write Burst and Sustained Throughput

    I would like all of these explained in more detail, along with their actual importance in selecting an SSD.

    Read the article

  • Synchronizing/deploying scripts across several systems

    - by otto
    I have a few time-consuming tasks that I like to spread across several computers. These tasks require running an identical Ruby or Python script (or series of scripts that call each other) on each machine. The machines will have a separate config file telling the script what portion of the task to complete. I want to figure out the best way to synchronize the scripts on these machines prior to running them.

    Up until now, I have been making changes to a copy of the script on a network share and then copying a fresh copy to each machine when I want to run it. But this is cumbersome and leaves a chance for error (e.g. missing a file on the copy or not clicking "copy and replace").

    Let's assume the systems are standard Windows machines that are not dedicated to this task, and that I don't need to run these scripts all the time (so I don't want a solution that runs 24/7 and always keeps them up to date; I'd prefer something that pushes/pulls on command). My thoughts on various options:

      1. Simple adaptation of my current workflow: keep the originals on the network drive, but write a batch file that copies over the latest version of the scripts so everything is a one-click operation. Requires action on each system, but that's not the end of the world (since each one usually needs its configuration file changed slightly too).
      2. Put everything in a Mercurial/Git repository and pull a fresh copy onto each node. Going straight to the repo from each machine would guarantee a current version (and would have the fringe benefit of allowing edits to the script from any machine). Cons would be that it requires a VCS to be installed on each machine, and there might be some pain dealing with authentication, since I wouldn't use a public repo.
      3. Open up write access on a shared folder and write a script that uses rsync (or similar) to push the changes out to all of the machines at once. This gets a current version onto every machine (though you would have to change the script to omit a machine or add a new one). A possible issue is that each computer has to allow write access.
      4. Dropbox is a reasonable suggestion (and could work well), but I don't want to use an external service, and I'd prefer not to have Dropbox running 24/7 on systems that would normally not need it.

    Is there something simple that I am missing? Some tool designed expressly for doing this kind of thing? Otherwise I am leaning toward tying all of the systems into Mercurial since, while it requires extra software, it is a little more robust than writing a batch file (e.g. if I split part of a script into a separate module, Mercurial will know what to do, whereas I would have to add a line to the batch file).
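    For the record, a minimal sketch of the two leading options above; the share path, local path, and repository location are placeholders:

      :: sync-scripts.bat -- option 1: mirror the master copy from the share
      robocopy \\fileserver\scripts C:\tasks\scripts /MIR /NP

      :: option 2: with Mercurial installed on each node, pull and update
      hg pull -u -R C:\tasks\scripts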

    Read the article

  • How does data I/O take place on USB flash memory?

    - by user35704
    I want to know how data I/O takes place on flash drives, which are typically EEPROMs. I started wondering about this while writing a C program that involves file handling. For a normal HDD, file I/O would involve returning the file pointer and reading or writing data on the disk, which is done by the read/write head. But an EEPROM has no read/write head, since it works purely electronically. So how does a C file-handling program work when I apply it to a file on a flash drive?

    Read the article

  • How to make newly created files have group rw permission

    - by Master
    I have CentOS 5.4 and a Joomla website. When I install a new PHP script, it creates its own folder, like images/scriptname. Now that folder has read and write permission only for one user (that user or the apache user, I don't know). I want all files created under the /home/ directory to have group write permission by default; not for a specific folder or specific user, but for all users' folders inside the /home directory. Is that possible, or is there another solution?
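    A common way to approximate this is a setgid bit on the directories plus default ACLs, so new files pick up the group and group-write bits regardless of the creating process's umask. A sketch, assuming the filesystem is mounted with ACL support and using a placeholder group name webusers:

      # New files and folders inherit the directory's group ...
      chmod g+s /home/somesite

      # ... and default ACLs grant the group write on everything created below
      setfacl -R -m g:webusers:rwX /home/somesite
      setfacl -R -d -m g:webusers:rwX /home/somesite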

    Read the article

  • My 2GB USB drive shows 256MB free space and 0 bytes used space; how do I fix it?

    - by sabasi
    I wrote an image to the USB drive with Win32DiskImager. Before I pressed Write, it warned that writing to a physical drive may corrupt the drive; I thought it might not, so I pressed Write. After finishing my work with the USB drive I formatted it, but after formatting, the drive properties say 'used space: 0 bytes, free space: 256MB'. How can a 2 GB flash drive change into a 256 MB flash drive? Please help me fix it. Is there any way to make it like it was before?
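    What most likely happened is that the image carried its own, smaller partition table, and Windows' Format only touches the existing partition, not the disk layout. Rewriting the partition table usually restores the full capacity. A sketch using diskpart; "select disk 2" is a placeholder, so verify the number against the "list disk" output, because "clean" erases the entire selected disk:

      C:\> diskpart
      DISKPART> list disk
      DISKPART> select disk 2
      DISKPART> clean
      DISKPART> create partition primary
      DISKPART> format fs=fat32 quick
      DISKPART> assign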

    Read the article

  • How can I retrieve statistics from my GhostCast server?

    - by Foxtrot
    I have a GhostCast server running for deploying images, and I would like each GhostCast session to write statistics to a file (it can be multiple text files, or it can append to one existing file). I know this is possible based on the options the GhostCast software provides for writing to a log file, but I would like it automated for every image being backed up and restored. I don't want my employees to have to click "write to a new file" every time. Is this possible?

    Read the article

  • SQL Server on VMWare - is transaction log corruption possible?

    - by demp
    Scenario: SQL Server 2005 or 2008 on a Windows 2008 OS, running in a VM hosted on a VMware ESX server. Is there any known issue where VMware caches a pass-through write request and it never reaches the disk, while SQL Server "thinks" the write actually happened? This could lead to transaction log corruption in the case of a power failure or VM reboot. I just overheard the conversation but couldn't find anything about it in relation to ESX.

    Read the article

  • Preventing users from deleting SQL data

    - by me2011
    We just purchased a program that requires its users to have an account on the MS SQL server, with read/write access to the program's database. My concern is that since these users will now have write access to the database, they could connect to the SQL server directly, outside of the program's client, and mess with the data in the tables. Is there any way I can prevent direct access to the database while still allowing access via the client program?
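    One standard SQL Server mechanism for exactly this scenario is an application role: the users themselves get no rights on the tables, and the client program activates the role (with a password only it knows) to gain them. A sketch with placeholder names; note this only works if the vendor's client can be configured to call sp_setapprole after connecting:

      -- Grant data access to the application role, not to the users
      CREATE APPLICATION ROLE app_users WITH PASSWORD = 'S3cretAppPwd!';
      GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO app_users;

      -- The client program runs this right after it connects:
      EXEC sp_setapprole 'app_users', 'S3cretAppPwd!';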

    Read the article

  • Network share permission issues

    - by JL
    I have an IIS server running a site whose app pool runs as Local System; this was done because it's easier to get full permissions to certificates and other file-based resources on the local server. The problem is that when I try to write or copy a file to a network share, permissions are obviously not in place on the remote system for the IIS server's Local System account. Is it possible to grant permissions on the remote system, read/write or even full access, to the IIS server's Local System account?
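    When a service runs as Local System, it authenticates on the network as the computer account (DOMAIN\MACHINE$), so that is the identity to grant rights to on the remote box. A sketch with placeholder names; the share-level permissions have to be opened up for the same account as well:

      REM On the remote server: grant NTFS modify rights to the web
      REM server's computer account (note the trailing $)
      icacls D:\Shares\Uploads /grant "CONTOSO\WEBSRV01$:(OI)(CI)M"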

    Read the article

  • How to append to a file as sudo? [closed]

    - by obvio171
    Possible Duplicate: sudo unable to write to /etc/profile

    I want to do:

      echo "something" >> /etc/config_file

    But since only the root user has write permission to this file, I can't do that. And this:

      sudo echo "something" >> /etc/config_file

    also doesn't work. Is there any way to append to a file in that situation without having to first open it with a sudo'd editor and then append the new content by hand?
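    The usual explanation is that the redirection is performed by your unprivileged shell before sudo ever runs, so the fix is to let a privileged process do the actual writing. Two equivalent sketches:

      # tee runs under sudo and does the writing; -a means append
      echo "something" | sudo tee -a /etc/config_file > /dev/null

      # or run the whole command line in a root shell
      sudo sh -c 'echo "something" >> /etc/config_file'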

    Read the article

  • xsl literal with <xsl:if>

    - by Elena
    Hi, I have to write a very simple piece of XSL: if column = 0 and result = .34, set the background color to green and write $result; but if result = 0.10, set the background color to white and write the word "QQQ". What doesn't work is:

      <xsl:if test="$result = 0.35 and $column = 0">
        <xsl:attribute name='background-color'>#669933</xsl:attribute>
        <xsl:value-of select="result"/>
      </xsl:if>
      <xsl:if test="$result = 0.10">
        <xsl:value-of select="QQQ"/>
      </xsl:if>

    Any suggestions? Thanks in advance
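    For future visitors, two likely culprits in the snippet above: <xsl:value-of select="QQQ"/> selects a child element named QQQ rather than outputting the literal string, and background-color is not an HTML attribute on its own; it belongs inside a style attribute. A corrected sketch, assuming HTML output and the same $result/$column variables as in the question:

      <xsl:choose>
        <xsl:when test="$result = 0.35 and $column = 0">
          <xsl:attribute name="style">background-color:#669933</xsl:attribute>
          <xsl:value-of select="$result"/>
        </xsl:when>
        <xsl:when test="$result = 0.10">
          <xsl:attribute name="style">background-color:#ffffff</xsl:attribute>
          <xsl:text>QQQ</xsl:text>
        </xsl:when>
      </xsl:choose>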

    Read the article

  • CDO.Message problem on Windows Server 2008

    - by dcrowell@
    I have a Classic ASP page that creates a CDO.Message object to send email. The code works on Windows Server 2003 but not 2008. On 2008, an "Access is denied" error gets thrown. Here is a simple test page I wrote to diagnose the problem. How can I get this to work on Windows Server 2008?

      On Error Resume Next
      Dim myMail
      Set myMail = CreateObject("CDO.Message")
      If Err.Number <> 0 Then
          Response.Write("Error Occurred: ")
          Response.Write(Err.Description)
      Else
          Response.Write("CDO.Message was created")
          myMail.Subject = "Sending email with CDO"
          myMail.From = "[email protected]"
          myMail.To = "[email protected]"
          myMail.TextBody = "This is a message."
          myMail.Send
          Set myMail = Nothing
      End If
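    A frequent cause of "Access is denied" with CDO on Server 2008 is that CDO defaults to the local SMTP pickup directory, which may not be installed or writable by the application identity. Pointing CDO at an SMTP server over the network avoids the pickup directory entirely; a sketch, with the server name as a placeholder, to be placed before myMail.Send:

      Dim cfg
      Set cfg = myMail.Configuration
      With cfg.Fields
          ' 2 = cdoSendUsingPort (deliver via a network SMTP server)
          .Item("http://schemas.microsoft.com/cdo/configuration/sendusing") = 2
          .Item("http://schemas.microsoft.com/cdo/configuration/smtpserver") = "mail.example.com"
          .Item("http://schemas.microsoft.com/cdo/configuration/smtpserverport") = 25
          .Update
      End With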

    Read the article

  • Connecting to DB2 from SSIS

    - by Christopher House
    The project I'm currently working on involves moving various pieces of data from a legacy DB2 environment to some SQL Server and flat file locations. Most of the data flows are real time, so they were a natural fit for the client's MQSeries on their iSeries servers and BizTalk to handle the messaging. Some of the data flows, however, are daily batch type transmissions. For the daily batch transmissions, it was decided that we'd use SSIS to pull the data directly from DB2 to either a SQL Server or a flat file.

    I'm not at all an SSIS guy. I've done a bit here and there, but mainly for situations where we needed to move data from a dev environment to QA, mostly informal stuff like that. And as much as I'm not an SSIS guy, I'm even less a DB2/iSeries guy. Prior to this engagement, my knowledge of DB2 was limited to the fact that it's an IBM product and that it was probably a DBMS platform (that's what the DB in DB2 means, right?).

    One of my first goals when I came onto this project was to develop a POC SSIS package to pull some data from DB2 and dump it to a flat file. It sounded like a pretty straightforward task. As always, the devil is in the details. Configuring the DB2 connection manager took a bit of trial and error. As such, I thought I'd post my experiences here in the hope that they might save someone the effort I went through. That being said, please keep in mind that, as I pointed out, I'm not at all a DB2 guy, so my terminology and explanations may not be 100% spot on.

    Before you get started, you need to figure out how you're going to connect to DB2. From the research I did, it looks like there are a few options. IBM has both an OLE DB and a .Net data provider, which can be found here. I installed their client access tools and tried to use both the .Net and OLE DB providers, but I received an error message from both when attempting to connect to the iSeries, indicating that I needed a license for a product called DB2 Connect. I inquired with one of my client's iSeries resources about a license for this product and it appears they didn't have one, so that meant the IBM drivers were out.

    The other option that I found quite a bit of discussion around was Microsoft's OLE DB Provider for DB2. This driver is part of the feature pack for SQL Server 2008 Enterprise Edition and can be downloaded here.

    As it turns out, I already had Microsoft's driver installed on my dev VM, which struck me as odd since I hadn't installed it. I discovered that the driver is installed with the BizTalk adapter pack for host systems, which was also installed on my VM. However, it looks like the version used by the adapter pack is newer than the version provided in the SQL Server feature pack.

    Once you get the driver installed, create a connection manager in your package just like you normally would, and select the Microsoft OLE DB Provider for DB2 from the list of available drivers. After you select the driver, you'll need to enter your host name, login credentials and initial catalog.

    A couple of things to note here. First, the initial catalog needs to be the same as your host name. Not sure why that is, but trust me, it just does. Second, for credentials: in my environment, we're using what the client's iSeries people refer to as "profiles". I guess this is similar to SQL auth in the SQL Server world. In other words, they've given me a username and password for connecting to DB2, so I've entered it here.

    Next, click the Data Links button. On the Data Links screen, enter your package collection on the first tab. Package collection is one of those DB2 concepts I'm still trying to figure out. From the little bit I've read, packages are used to control SQL compilation, and each DB2 connection needs one. The package collection, I believe, controls where your package is created. One of the iSeries folks I've been working with told me that I should always use QGPL for my package collection, as QGPL is "general purpose" and doesn't require any additional authority.

    Next, click the ellipsis next to the Network drop-down. Here you'll want to enter your host name again. Again, not sure why you need to do this, but trust me, my connection wouldn't work until I entered my hostname here.

    Finally, go to the Advanced tab, select your DBMS platform and check Process binary as character. My environment is DB2 on the iSeries, and iSeries is the replacement for AS/400, so I selected DB2/AS400 for my platform. Process binary as character was necessary to handle some of the DB2 data types; I had a few columns that showed all their data as "System.Byte[]", and checking Process binary as character resolved this.

    At this point, you should be good to go. You can go back to the Connection tab on the Data Links dialog to perform a couple of tests to validate your configuration. The Test Connection button is obvious: it just verifies that you can connect to the host using the configuration data you've entered. The Packages button will attempt to connect to the host and create the packages required to execute queries.

    This isn't meant to be a comprehensive look at SSIS and DB2; these are just some of the notes I've come up with since I started working with DB2 and SSIS. I'm sure as I continue developing my packages I'll find more quirks and will post them here.
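    For reference, the same settings expressed as an OLE DB connection string; the host, credentials, and catalog are placeholders, and the property names are the ones the Microsoft OLE DB Provider for DB2 uses, so double-check them against your installed version:

      Provider=DB2OLEDB;User ID=MYPROFILE;Password=secret;
      Initial Catalog=MYISERIES;Network Transport Library=TCPIP;
      Network Address=myiseries.company.local;
      Package Collection=QGPL;DBMS Platform=DB2/AS400;
      Process Binary as Character=True;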

    Read the article
