Search Results

Search found 5786 results on 232 pages for 'eng sub'.

Page 144/232 | < Previous Page | 140 141 142 143 144 145 146 147 148 149 150 151  | Next Page >

  • How can I password-protect a Mac shared folder on a Windows workgroup?

    - by Phillip Oldham
    We have a Mac mini running 10.5.8 which already acts as a file server for our simple Windows (mixed XP/Vista) workgroup. The Mac mini is on the same workgroup and the files are shared via SMB, FTP, and AFP. Basic file sharing is working, and has been for some time. We'd now like to add an additional directory/share secured by a password so that only a small number of people on the network have access. Is this possible? I've already tried creating the additional folder on the Mac, adding it to the shared folders, and limiting it to a specific "shared user", but it isn't possible to log in from an XP machine. Adding a sub-directory to the currently working share and limiting its access to the shared user doesn't work either.
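
    One quick check from the XP side is whether the share accepts the credentials when they are supplied explicitly; a minimal sketch (the share and account names are placeholders):

        net use Z: \\mac-mini\SecureShare * /user:shareduser
        :: the * prompts for the password; if this also fails, the problem is
        :: authentication on the Mac side rather than on the XP client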

    Read the article

  • DNS hierarchy not working, please help

    - by nikhilelite
    Sub-internal network (DNS1, WWW1, Gateway1) and internal network (DNS0, WWW0, Gateway0):
        DNS1: 192.168.250.3/24, WWW1: 192.168.250.4/24, Gateway1: 192.168.250.1/24 (internal) :: 192.168.0.150 to 192.168.0.175 (external)
        DNS0: 192.168.0.197/24, WWW0: 192.168.0.197/24, Gateway0: 192.168.0.1 (internal) :: 69.94.x.x (external, dynamic, ISP controlled)
    Expected behavior: when using dig from hosts on the sub-internal network (192.168.250.0/24) to query a domain for which the 192.168.0.197 nameserver is authoritative, it should return the IP address. What's happening: after the dig, the answer section is empty and the query goes to a root server (a.root-servers.net) instead of 192.168.0.197, even though I have defined 192.168.0.197 as the DNS server in Gateway1's resolv.conf. Why? I need this working ASAP; can anyone here help?
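
    A pair of test queries run from one of the 192.168.250.0/24 hosts can narrow this down; a minimal sketch, with the host name a placeholder:

        # ask DNS1, the resolver the internal clients normally use
        dig @192.168.250.3 somehost.internal.example A
        # ask the authoritative server directly
        dig @192.168.0.197 somehost.internal.example A

    If the second query answers and the first does not, DNS1 is not forwarding for that zone and is falling back to its root hints; a forward zone (or global forwarders) pointing at 192.168.0.197 in DNS1's named.conf is the usual fix.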

    Read the article

  • How can I cache one more web site on the same backend server (web server) with Varnish?

    - by Kerberos
    I have one web server, IIS, sitting behind Varnish. There are several web sites on IIS, distinguished by host header, and all of them are published on port 80. Can I cache all of the web sites with Varnish using something like the code below?
        backend CacheWebSites { .host = "192.168.0.1"; .port = "80"; }
        sub vcl_recv {
            if (req.http.host == "www.example1.com") { set req.backend = CacheWebSites; }
            if (req.http.host == "www.example2.com") { set req.backend = CacheWebSites; }
            if (req.http.host == "www.example3.com") { set req.backend = CacheWebSites; }
        }
    I can't test this code yet; it is just a scenario. Thank you in advance for your help.

    Read the article

  • Windows 2003-R2-Server: Process "System" takes large chunks of CPU time

    - by Dabu
    I have a domain controller running 2003 R2. The server behaves very well when restarted daily; however, on any day it is not restarted, there's a process called "System" that takes enormous chunks of CPU time (up to 95%). The server supports AD, WINS, and DNS, has Kaspersky Endpoint Security running, and manages backups via Arcserve 15. What I have tried so far: Process Explorer (formerly Sysinternals) shows that the "System" process has no sub-processes. In the "Threads" tab of the detailed view I can see that 90% of the CPU time is used up by "ntkrnlpa.exe+0x803c0". The "Interrupts" process is running at 3-5% of CPU time; I'm not sure if this accounts for the amount of CPU time that System takes.

    Read the article

  • Ubuntu can't install an older version of a package

    - by Trevor Newhook
    When I try to do an apt-get install, I keep getting an error:
        Depends: libgtk-3-common (= 3.4.1-0ubuntu1) but 3.4.2-0ubuntu0.4 is to be installed
    When I run sudo apt-get -f install, I get several
        dpkg: warning: files list file for package 'XXX' missing, assuming package has no files currently installed.
    then
        Preparing to replace libgtk-3-bin 3.4.1-0ubuntu1 (using .../libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb) ...
        Adding 'diversion of /usr/sbin/update-icon-caches to /usr/sbin/update-icon-caches.gtk2 by libgtk-3-bin'
        dpkg-divert: error: rename involves overwriting `/usr/sbin/update-icon-caches.gtk2' with different file `/usr/sbin/update-icon-caches', not allowed
        dpkg: error processing /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb (--unpack):
        subprocess new pre-installation script returned error exit status 2
        Errors were encountered while processing:
        /var/cache/apt/archives/libgtk-3-bin_3.4.2-0ubuntu0.4_i386.deb
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    I'm not sure why it's complaining about a newer version of a package, but any help would be appreciated.
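
    The immediate failure is the dpkg-divert rename clash rather than the version mismatch. A sketch of how one might inspect the diversion and, only if it turns out to be a stale leftover, drop it before retrying; treat this as an assumption to verify, not a definitive fix:

        # see which package registered the diversion for update-icon-caches
        dpkg-divert --list | grep update-icon-caches
        # if it is a leftover no installed package still owns, remove it and retry
        sudo dpkg-divert --rename --remove /usr/sbin/update-icon-caches
        sudo apt-get -f install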

    Read the article

  • High memory utilization by sqlservr.exe process

    - by abdul samad
    When I look in Task Manager (Processes tab) or at the perfmon memory counters (SQLServer:Memory Manager: Target Server Memory and Total Server Memory), I see high memory utilization by the sqlservr.exe process: nearly 8 GB (Target Server Memory) and 7.95 GB (Total Server Memory). When I restart the MSSQLSERVER service it shoots back up to the same size. I get this quite frequently. Please help me identify why SQL Server is using so much memory and how to find out which query, stored procedure, etc. is making it use that much. I am not using any triggers or cursors in my code. Thanks
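
    SQL Server keeps acquiring memory up to its 'max server memory' setting by design, so a high Total/Target value by itself is expected. A minimal sketch of capping it; the 6144 MB figure is an arbitrary example and "." assumes a default local instance:

        sqlcmd -S . -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory (MB)', 6144; RECONFIGURE;"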

    Read the article

  • With NHibernate, how can I add a child object when updating a parent object?

    - by BMZ
    I have a simple Parent/Child relationship between a Person object and an Address object. The Person object exists in the DB. After doing a Get on the Person, I add a new Address object to the Address sub-object list of the parent, and do some other updates to the Person object. Finally, I do an Update on the Person object. With a SQL trace window, I can see the update to the Person object in the Person table and the insert of the Address record into the Address table. The issue is that, after the update is performed, the AddressId (primary key on the Address object) is still set to 0, which is what it defaults to when you first initialize the Address object. I have verified that when I do an Add, this value is set correctly. Is this a known issue when trying to add sub-objects as part of an NHibernate UPDATE? Sample code and mapping files are below. Thanks
        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <class name="BusinessEntities.Wellness.Person,BusinessEntities.Wellness" table="Person"
                 lazy="true" dynamic-insert="true" dynamic-update="false">
            <id name="Personid" column="PersonID" type="int">
              <generator class="native" />
            </id>
            <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`"/>
            <property type="int" not-null="true" name="Customerid" column="`CustomerID`" />
            <property type="AnsiString" not-null="true" length="9" name="Ssn" column="`SSN`" />
            <property type="AnsiString" not-null="true" length="30" name="FirstName" column="`FirstName`" />
            <property type="AnsiString" not-null="true" length="35" name="LastName" column="`LastName`" />
            <property type="AnsiString" length="1" name="MiddleInitial" column="`MiddleInitial`" />
            <property type="DateTime" name="DateOfBirth" column="`DateOfBirth`" />
            <bag name="PersonAddresses" inverse="true" lazy="true" cascade="all">
              <key column="PersonID" />
              <one-to-many class="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" />
            </bag>
          </class>
        </hibernate-mapping>

        <hibernate-mapping xmlns="urn:nhibernate-mapping-2.2">
          <class name="BusinessEntities.Wellness.PersonAddress,BusinessEntities.Wellness" table="PersonAddress"
                 lazy="true" dynamic-insert="true" dynamic-update="false">
            <id name="PersonAddressId" column="PersonAddressID" type="int">
              <generator class="native" />
            </id>
            <version type="binary" generated="always" name="RecordVersion" column="`RecordVersion`" />
            <property type="AnsiString" not-null="true" length="1" name="AddressTypeid" column="`AddressTypeID`" />
            <property type="AnsiString" not-null="true" length="60" name="AddressLine1" column="`AddressLine1`" />
            <property type="AnsiString" length="60" name="AddressLine2" column="`AddressLine2`" />
            <property type="AnsiString" length="60" name="City" column="`City`" />
            <property type="AnsiString" length="2" name="UsStateId" column="`USStateID`" />
            <property type="AnsiString" length="5" name="UsPostalCodeId" column="`USPostalCodeID`" />
            <many-to-one name="Person" cascade="none" column="PersonID" />
          </class>
        </hibernate-mapping>

        Person newPerson = new Person();
        newPerson.PersonName = "John Doe";
        newPerson.SSN = "111111111";
        newPerson.CreatedBy = "RJC";
        newPerson.CreatedDate = DateTime.Today;
        personDao.AddPerson(newPerson);

        Person updatePerson = personDao.GetPerson(newPerson.PersonId);
        updatePerson.PersonAddresses = new List<PersonAddress>();
        PersonAddress addr = new PersonAddress();
        addr.AddressLine1 = "1 Main St";
        addr.City = "Boston";
        addr.State = "MA";
        addr.Zip = "12345";
        updatePerson.PersonAddresses.Add(addr);
        personDao.UpdatePerson(updatePerson);
        int addressID = updatePerson.PersonAddresses[0].AddressId;

    Read the article

  • Robocopy, do not overwrite existing files, but copy the changed / new ones

    - by I don't know.
    Is it possible to mirror two directories without overwriting the files in the destination directory with new/changed/deleted files? Something like snapshots. Example: copy the source directory with all files and sub-directories to the destination directory, but if the destination already contains, say, A.xls and A.xls has been changed in the source directory, then copy A.xls but keep the previous A in the destination as well. To preserve the previous file, a datestamp or counter can be added to the file name. Example after copy:
        SomeDirectory
        |--A.xls
        |--A_20120701.xls
        |--A_20120920.xls
    Thank you.
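
    Robocopy has no per-file versioning switch, so this exact layout (A.xls plus A_20120701.xls in one folder) needs scripting. The closest built-in approach is a whole date-stamped snapshot per run; a rough sketch with placeholder paths and a hand-written date:

        robocopy C:\Source "D:\Snapshots\2012-09-20" /E
        :: each run writes into its own dated folder, so earlier copies of A.xls are never overwritten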

    Read the article

  • Map a drive to the root of a server (\\server) in Vista

    - by Andy T
    Hi, in Win XP I can very easily map a network drive to the root of my NAS server. I browse to it in Explorer (\\192.168.1.70), choose "Map Network Drive", choose the drive letter, done. In Vista this does not seem possible. I have to use "Map Network Drive" from 'Computer' and then enter the address, but it will only let me map to specific shares (sub-folders off the server root) and NOT to the server root itself. Since my NAS has built-in shares (music, photo, video, etc.), I would have to have drive letters for all of these, which I absolutely don't want. Can anyone tell me why I can easily map to the server root from XP but not in Vista? Is there something fundamentally different in the networking between the two OSes? Or do I just need to do things a different way? Hope someone can help. Thanks, AT

    Read the article

  • Always-true RewriteCond

    - by Matt
    My university provides a public_html directory in each student's Linux home directory so that each student can have a webpage. I want to put all my PHP scripts into that directory and place the index in a sub-directory called webroot. I'm trying to work out a way to have an .htaccess file in public_html that redirects ALL requests in that folder. There's lots of advice on redirecting any file that doesn't exist, but I want to redirect regardless of whether the file exists. Can I use something like RewriteCond TRUE?
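
    There is no need for an always-true RewriteCond: a RewriteRule with no conditions already matches every request. A minimal .htaccess sketch for public_html, assuming mod_rewrite is allowed there and with the /~student/ prefix and webroot/ name as placeholders to adjust:

        RewriteEngine On
        # don't rewrite requests that are already inside webroot/ (avoids a loop)
        RewriteCond %{REQUEST_URI} !^/~student/webroot/
        RewriteRule ^(.*)$ webroot/$1 [L]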

    Read the article

  • how to have publishing, blog and wiki features together?

    - by George2
    Hello everyone, I am using SharePoint 2007 Enterprise + the Publishing portal template + Windows Server 2008. I want to have blog and wiki features as well as publishing portal features. Any ideas how to integrate the publishing portal, blog, and wiki? By integrate, I mean using the same user name and password to pass through authentication for the publishing portal, blog, and wiki. And should I set up 3 different site collections for the publishing portal, blog, and wiki (I find that if I set up a publishing portal site collection, I cannot create blog and wiki sub-sites)? Thanks in advance, George

    Read the article

  • Optimal dir structure for keeping millions of files on an ext4 system

    - by Alex Flo
    I need to keep millions of files on an ext4 system. I understand that a structure with multiple subdirectories is the generally accepted solution. I wonder what the optimal approach is in terms of the number of dirs/subdirs. For example, I tried a structure like 16/16/16/16 (that is, (sub)directories numbered 1 to 16 at each level) and found that I was able to move 100K files into this structure in 2m50s. Moving 100K files into an 8/8/8/8/8/8 structure took 11 minutes. So the 16/16/16/16 approach seems better, but I was wondering if anyone has empirical experience with an even better dir/subdir distribution.
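
    For what it's worth, a 16-way split per level is easy to drive from the file name itself; a minimal bash sketch (file name and depth are placeholders) that buckets on the first hex digits of an md5:

        #!/bin/bash
        # place a file into a 16/16/16/16 bucket derived from its name
        f="somefile.jpg"
        h=$(printf '%s' "$f" | md5sum | cut -c1-4)    # 4 hex chars -> 4 levels of 16
        dest="${h:0:1}/${h:1:1}/${h:2:1}/${h:3:1}"
        mkdir -p "$dest" && mv "$f" "$dest/"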

    Read the article

  • Extract part of a image from a big image

    - by rajat
    I have 6 images, and each image has a certain section that I want to save as a separate image. The problem is that it has to be accurate, because I am doing some animation using the sub-images, so they should line up exactly. I want to accurately extract that part from each of the 6 images. I can't do it in an image editor where I have to draw the bounding box myself, because that will not be accurate. Is there any program that lets me do this by defining the box with numerical values? PS: I don't want to write a MATLAB or OpenCV program for this.
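
    If ImageMagick counts as a tool rather than a program you have to write, the crop box can be given numerically; a sketch with placeholder file names and placeholder geometry (a 200x100 box whose top-left corner is at x=50, y=75):

        for i in 1 2 3 4 5 6; do
            convert "image$i.png" -crop 200x100+50+75 +repage "part$i.png"
        done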

    Read the article

  • Copy files with filter (XP)

    - by fire
    I have a huge folder (over 6GB) with multiple sub-folders that I want to copy onto an external hard drive; however, to save space I do not want it to copy any PDF, EXE, or ZIP files across. Is there any software that will help me achieve this? I have looked at TeraCopy but it doesn't seem to have any filter mechanism. I am using Windows XP (sigh). Edit: found the xcopy command, will this do it? Can anyone help me with the syntax?
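
    xcopy can do this with an exclusion list; a minimal sketch where the paths are placeholders and excludes.txt contains one pattern per line:

        xcopy C:\BigFolder E:\Backup\BigFolder /E /I /EXCLUDE:C:\excludes.txt
        :: excludes.txt contains:
        ::   .pdf
        ::   .exe
        ::   .zip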

    Read the article

  • Copy a single file from main directory recursively across all directories within

    - by chris
    I'm on a dedicated server running CentOS, and on this server I have 5000+ directories in one main directory. In the main directory I have an index.php. I would like to copy this index.php into all 5000+ directories, but the only way I know how is to do it manually. Is there a command-line way, something like cp, that works from the main directory and copies the file all the way down through all the directories and their sub-directories within the main directory I am starting out in?
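
    find can do the fan-out in one line; a sketch assuming the main directory is /path/to/main and index.php sits at its top level:

        find /path/to/main -mindepth 1 -type d -exec cp /path/to/main/index.php {} \;
        # -mindepth 1 skips the main directory itself, which already has the file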

    Read the article

  • MS Project publishing to TFS web portal display

    - by denis bastarache
    When we initially created our MPP schedule, I made use of indents / subordinate tasks to break down the project by the various stages of the lifecycle, which is fine, no issues there. But now that I'm trying to publish this over to the TFS display, it only picks up the actual "action items / sub-tasks", since those are the ones with resource allocation specified. So, for example, I have an "Analysis" phase with a few items underneath and a "System Requirements" phase with the same items; when I publish these to TFS it won't display the "parent" distinction between items, so both "task" instances are published in TFS under the exact same name. If I can't do this automatically, I'll likely have to edit each task as "Analysis - Item 1", "Analysis - Item 2", "SRD - Item 1", "SRD - Item 2". Is there a way to do this automatically, or will I have to go the manual route?

    Read the article

  • Splitting Multiple Files in Windows

    - by Justin Boucher
    We have a 21TB LUN full of images that are approx. 600KB each, spread across multiple sub-folders on the disk. We are trying to split the 21TB LUN into 8 smaller LUNs of about 2.6TB apiece in order to process the images more effectively. My question is: how can we determine which folders add up to 2.6TB on the drive? What is the best tool to mark off this data so we can copy it to the new smaller LUNs with robocopy or emcopy without overfilling them? Is there a third-party tool better suited to this task? Thank you in advance for your assistance.
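
    One way to see how the 21TB breaks down per top-level folder without copying anything is robocopy's list-only mode; a sketch as a batch file, with D:\Images and C:\dummy as placeholders (nothing is written because of /L):

        @echo off
        for /D %%F in (D:\Images\*) do (
            echo %%F
            robocopy "%%F" C:\dummy /L /E /BYTES /NFL /NDL /NP | findstr /C:"Bytes :"
        )

    The per-folder byte totals can then be grouped by hand, or in a spreadsheet, into roughly 2.6TB sets before running robocopy or emcopy for real.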

    Read the article

  • Dir and Findstr commands taking a long time to complete in Batch File

    - by user2405934
    I am using the command below to list file information:
        dir %DRIVE_NAME%: /S /C /A-D /Q /T:C | findstr ".zip$ .doc$ .xls$ .xpt$ .cpt$ .cpo$ .xlsx$ .pdf$ .dat$ .txt$ .docx$ .csv$" >> file.info
    which produces output such as:
        03/27/2013 01:02 PM 86,280 uusr\fr02 h123_frf67_rk_20140327.txt
        03/27/2013 01:02 PM 5,513 usr\fr02 h123_frf67_rk_20140328.txt
    %DRIVE_NAME%: is a mapped drive. The folders stay the same: not more than 100 folders and their sub-folders, and there will only be 2 or 3 files at a time in any one of the folders. Now the issue is that for one folder it works perfectly, but for 80 to 90 folders it takes too much time. I think it's because of findstr and the different extensions used. Is there any way to make it faster?
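
    One thing that might be worth measuring is letting dir do the extension filtering itself instead of piping the full listing through findstr, since dir accepts several file specs in one call; a sketch with only a few of the extensions shown (add the rest the same way):

        dir %DRIVE_NAME%:\*.zip %DRIVE_NAME%:\*.doc %DRIVE_NAME%:\*.xls %DRIVE_NAME%:\*.pdf %DRIVE_NAME%:\*.txt /S /C /A-D /Q /T:C >> file.info

    Whether this is actually faster over a mapped drive needs testing, but it removes the second pass over the output.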

    Read the article

  • How to identify RAID (5 or 6) controllers that allow dynamic resize of the array

    - by David Pfeffer
    I'm building a server with a RAID5 array, based on a hardware controller. I want to be able to later add additional disks and have the array rebalance across all of the disks, enlarging the usable size. I also want to be able to later upgrade to bigger disks (one at a time, of course) and then expand the array to fill the entire drive. These features are available in Linux software raid (md). I've also heard they're available in some hardware controllers. Currently, I own the Adaptec RAID 3805 card and the 3ware 9650se card. I'd prefer to use the Adaptec if possible, but I can't find if either of these cards offer this feature. If they don't, are there other affordable (read as: sub-$600) RAID cards available that can accomplish this?

    Read the article

  • how to have files created by CMS have the same ownership as SSH user

    - by Cam
    I am having difficulty on our Ubuntu server. I have an SSH user, and when I create files as this user the ownership is web_user:www-data. The problem is when a file is uploaded or created by a content management system like Joomla. When files are uploaded through Joomla, such as components or modules, the ownership is set to www-data:www-data. This means I then need to chown all new files to web_user:www-data so we can edit them. Is there a way to set, for a directory and its sub-directories, that all newly created files get the ownership web_user:www-data? Do I need to use something like setuid or setgid? Any help would be greatly appreciated.
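
    The group half of this is straightforward with the setgid bit; the user half is not, because Linux has no equivalent owner inheritance, so files Apache creates will still be owned by www-data. A sketch for the group part, assuming the web root is /var/www/site and that web_user is a member of the www-data group:

        sudo chgrp -R www-data /var/www/site
        # setgid on directories makes new files and sub-directories inherit the www-data group
        sudo find /var/www/site -type d -exec chmod g+s {} \;
        # give the group write access so web_user can edit what Joomla uploads
        sudo chmod -R g+w /var/www/site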

    Read the article

  • Delete directories recursively with the FTP command in Bash

    - by Fake4d
    I have a problem with my infrastructure here. I am in a closed DMZ and have to access an FTP server in another DMZ from a headless SUSE Linux 10.1 box, so I think I only have the ftp command. But I have to delete a directory with about 100 subdirectories and endless files in it. When I type "del directory" it returns "It's not empty", so I would have to delete each sub-directory and file manually. Please tell me a way I can do this automatically :)
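
    If anything beyond the plain ftp client can be installed or borrowed, lftp handles this in one command; a sketch with placeholder credentials, host, and directory name:

        lftp -u user,password -e "rm -r directory; bye" ftp.example.com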

    Read the article

  • How to remove duplicate illegal site in apache configuration?

    - by zladuric
    I recently found a referrer in the Apache log on my site. I opened it out of curiosity; my site is live, but I've only just started development, so I didn't expect any referrers. The site was a pure copy of mine, and after investigating I saw that it resolves to my IP. I'm on Ubuntu 12.04, Apache 2, Drupal 7; I don't know what other info I can provide. My question is: how can I tell Apache that it should not serve this site? Thanks. Edit: I forgot to say that I had some bots register on my fresh Drupal installation. Also, my domain is a top-level domain, while this fake domain is a third-level one (i.e. sub.domain.de).
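
    Since the other domain simply points at your IP, Apache answers it from whichever vhost is the default for unmatched Host headers. The usual fix is a catch-all default VirtualHost that refuses them; a sketch for Apache 2.2 on Ubuntu 12.04, where the file name and ServerName are placeholders and the vhost must sort before the real site (e.g. sites-available/000-default):

        <VirtualHost *:80>
            ServerName catchall.invalid
            <Location />
                Order deny,allow
                Deny from all
            </Location>
        </VirtualHost>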

    Read the article

  • reorder XML elements or set an explicit template with XSLT

    - by Sash
    I tried the solution in my previous question (flattening XML to load via an SSIS package), however it isn't working. I now know what I need to do, but I need some guidance on how to do it. Say I have the following XML structure:
        <person id="1">
          <name>John</name>
          <surname>Smith</surname>
          <age>25</age>
          <comment>
            <comment_id>1</comment_id>
            <comment_text>Hello</comment_text>
          </comment>
          <comment>
            <comment_id>2</comment_id>
            <comment_text>Hello again!</comment_text>
          </comment>
          <somethingelse>
            <id>1</id>
          </somethingelse>
          <comment>
            <comment_id>3</comment_id>
            <comment_text>Third Item</comment_text>
          </comment>
        </person>
        <person id="2">
          <name>John</name>
          <surname>Smith</surname>
          <age>25</age>
          <somethingelse>
            <id>1</id>
          </somethingelse>
        </person>
        ...
    If I load this into an SSIS package as an XML source, what I essentially get is a table created for each element, as opposed to a structured table output such as: person table (name, surname, age), somethingelse table (id), comment table (comment_id, comment_text). What I end up getting is: person table (person_Id <-- internal SSIS id), name table, surname table, age table, person_name table, person_surname table, person_comment_comment_id table, etc. What I found is that if each element and all inner elements are not in the same format and order, I get the above anomaly, which makes it rather complex, especially when dealing with 80-100+ columns. Unfortunately I have no way of modifying the system (Lotus Notes) that produces these reports, so I was wondering whether I might be able to write an explicit XSLT template that aligns each person's sub-elements (and the sub-collection elements such as comment)? Unless there is a quicker way to realign all inner elements. It seems that the SSIS XML source requires a very consistent XML file, in the sense that if the name element is in position 1, then all subsequent name elements within a person parent have to be in position 1. SSIS seems to pick up the inconsistencies when certain elements are missing from one parent to another; however, if their ordering is not right (A, B, C)(A, B, C)(A, C, B), it will chuck a massive fuss! All help is appreciated! Thank you in advance.
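
    An identity transform plus one extra template for person is enough to force a fixed child order; a sketch based on the element names above (extend the list of apply-templates lines for the remaining 80-100+ elements):

        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <!-- copy everything as-is by default -->
          <xsl:template match="@*|node()">
            <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
          </xsl:template>
          <!-- but emit each person's children in one fixed order -->
          <xsl:template match="person">
            <xsl:copy>
              <xsl:apply-templates select="@*"/>
              <xsl:apply-templates select="name"/>
              <xsl:apply-templates select="surname"/>
              <xsl:apply-templates select="age"/>
              <xsl:apply-templates select="somethingelse"/>
              <xsl:apply-templates select="comment"/>
            </xsl:copy>
          </xsl:template>
        </xsl:stylesheet>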

    Read the article

  • Problems when looping over a series of ssh-ed commands

    - by Jack Medley
    I have a series of server machines on which I want to run the same command. Each command takes hours, and (even though I am running the commands using nohup and setting them to run in the background) I have to wait for each to finish before the next starts. Here is roughly how I have set it up. On the host machines:
        for i in {1..9}; do ssh RemoteMachine${i} ./RunJobs.sh; done
    where RunJobs.sh on each remote machine is:
        source ~/.bash_profile
        cd AriadneMatching
        for file in FileDirectory/Input_*; do
            nohup ./Executable ${file} &
        done
        exit
    Does anyone know a way so that I don't have to wait for each job to finish before the next starts? Or alternatively a better way of doing this? I have a feeling what I am doing is fairly sub-optimal. Cheers, Jack
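
    ssh waits not just for RunJobs.sh to return but also for the remote background jobs' stdout/stderr to close, so detaching their I/O (and, optionally, backgrounding the ssh calls as well) lets the loop move on immediately; a sketch:

        # inside RunJobs.sh: detach each job's output so ssh can exit straight away
        for file in FileDirectory/Input_*; do
            nohup ./Executable "${file}" > "${file}.log" 2>&1 < /dev/null &
        done

        # on the host: optionally run the ssh calls in parallel too
        for i in {1..9}; do
            ssh RemoteMachine${i} ./RunJobs.sh &
        done
        wait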

    Read the article
