Search Results

Search found 13911 results on 557 pages for 'sharepoint group'.

  • Apache 2 Symbolic link not allowed or link target not accessible

    - by djechelon
    While the title of this question matches one that has already been asked, in my case I have already set Options +FollowSymLinks. The setup is the following: my hosting layout includes an htdocs/ directory that is the default document root for HTTP sites and an htdocs-secure/ directory for HTTPS, meant for sites that need a different HTTPS version. When both share the same files I create a link from htdocs-secure to htdocs with ln -s htdocs htdocs-secure, but here comes the problem: the log still says Symbolic link not allowed or link target not accessible: /path/to/htdocs-secure. The vhost fragment is:

      Header always set Strict-Transport-Security "max-age=500"
      DocumentRoot /path/to/htdocs-secure
      <Directory "/path/to/htdocs-secure">
          allow from all
          Options +FollowSymLinks
      </Directory>

    I think this is a correct setup. The HTTP version of the site is accessible, so it doesn't look like a permission problem. How do I fix this? [Added] Other info: I use MPM-ITK and I set AssignUserId to the owner/group of both directories.
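
    One thing worth checking from a shell (a diagnostic sketch; /path/to and vhostuser are placeholders for the real path and AssignUserId user): Apache resolves the symlink as the per-vhost user under MPM-ITK, so every component of both the link path and the target path must be traversable by that user, and if SymLinksIfOwnerMatch is in effect anywhere, the link and its target must have the same owner.

      # show owner and mode of every path component, following the symlink
      namei -l /path/to/htdocs-secure
      # confirm the resolved target is readable as the vhost's user
      sudo -u vhostuser ls /path/to/htdocs-secure/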

  • Installing GitBlit GO as Service in Ubuntu Server 14.04

    - by Luis Masuelli
    I downloaded GitBlit GO 1.6.0 and unpacked it in /opt/gitblit on Ubuntu Server 14.04.1, configured HTTP on port 8280 and disabled HTTPS by assigning it port 0 (I expose it over HTTPS using nginx). I created a gitblit user with a strong password and added it to the sudo group by running sudo adduser gitblit sudo. I installed it as a service by running /opt/gitblit/install-service-ubuntu.sh, then tried to start it with sudo service gitblit start. The message "Starting gitblit server" appears, and it is the only message. When I hit http://127.0.0.1:8280 on the same machine, the connection cannot be made, and when I run sudo netstat -anp | grep 8280, nothing appears. I see no error messages, but the server is not starting. What am I missing?
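
    When an init script swallows the error like this, a useful first step is to run GitBlit in the foreground as the service user so startup failures reach the terminal (a sketch; the data folder and log path are assumptions based on a default GO unpack, not verified against 1.6.0):

      # run interactively so startup errors are printed instead of discarded
      sudo -u gitblit bash -c 'cd /opt/gitblit && java -jar gitblit.jar --baseFolder data'
      # check whatever the service script may have logged
      tail -n 50 /opt/gitblit/logs/*.log 2>/dev/null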

  • Why is the reported user "root" when a "normal user" executes "ps ux" on OS X? Is this normal behaviour?

    - by snies
    I am running OS X 10.6.1. When I am logged in as a normal user in the staff group and run ps ux, it lists my ps ux command as being run by root:

      snies    181   0.0  0.3  2774328  12500   ??   S     6:00PM   0:20.96 /System/Library...
      root    1673   0.0  0.0  2434788    508  s001  R+    8:16AM   0:00.00 ps ux
      snies    177   0.0  0.0  2457208    984   ??   Ss    6:00PM   0:00.52 /sbin/launchd
      snies   1638   0.0  0.0  2435468   1064  s001  S     8:13AM   0:00.03 -bash

    Is this normal behaviour? And if so, why? Please note that the user is not an Administrator account and is not able to sudo.
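
    A likely explanation, offered as an assumption to verify rather than a confirmed fact: on OS X, /bin/ps is installed setuid root so it can read kernel state, and the USER column shows the effective user of each process, so the ps process itself reports as root. Two quick checks from a shell:

      # an 's' in the owner-execute position means setuid
      ls -l /bin/ps
      # compare real and effective users; ruser should show your own account for ps
      ps -axo ruser,user,comm | grep -w ps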

  • Redmine: reposman.rb succeeds, but does not make SVN repos available to projects

    - by Joey Adams
    I'm testing reposman.rb on the command line (before I make it a cron job):

      /usr/sbin/reposman.rb --svn-dir=/var/svn \
        --redmine-host=http://example.com/projects --key='redacted' \
        --owner='nobody' --group='nobody'

    It succeeded, printing messages for projects that didn't have repos yet:

      repository /var/svn/project1 created
      repository /var/svn/project2 created

    and printed nothing when I ran the same command again, indicating it remembered the repos. However, if I look at the repository settings in Redmine for project1 and project2, they aren't set: although the SVN repos were created, the Redmine projects aren't configured to use them. How do I get reposman.rb to automatically configure the Redmine projects to use the repos after they're set up?
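
    If I remember reposman's options correctly (worth confirming with reposman.rb --help, so treat the flag below as an assumption), repositories are only declared back to Redmine when the sys API is enabled under Administration > Settings > Repositories ("Enable WS for repository management", with the same key) and when reposman is told what URL to record via --url:

      /usr/sbin/reposman.rb --svn-dir=/var/svn \
        --redmine-host=http://example.com/projects --key='redacted' \
        --owner='nobody' --group='nobody' \
        --url=file:///var/svn    # base URL recorded in each project's repository settings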

  • Restore e-mail from "junk e-mail" folder

    - by soonts
    I'm using MS Office Outlook 2007 and check my Junk E-mail folder about once a month. This time I found my whole Google Groups correspondence in there. I right-clicked a message and chose "Add sender to safe senders list", so future messages from this group should be delivered to my inbox. The problem is the messages that are already in the junk folder. Is there a way to re-sort the junk folder automatically, so that e-mails from senders on my safe list are recovered from the junk folder and moved to, say, the Inbox? Thanks in advance.

  • Confusion about HSRP Groups

    - by Kyle Brandt
    If I have a router that has several LANs on it, and each of these LANs is attached to a second router, do I need to use a different HSRP group for each LAN? With this setup, each virtual gateway is on its own Layer 2 segment, and within a router no interface has multiple gateways. For example:

      Router 1:
        interface F0/0
         ip address 192.168.1.2 255.255.255.0
         standby ip 192.168.1.1
        interface F2/0
         ip address 192.168.2.2 255.255.255.0
         standby ip 192.168.2.1

      Router 2:
        interface F0/0
         ip address 192.168.1.3 255.255.255.0
         standby ip 192.168.1.1
        interface F2/0
         ip address 192.168.2.3 255.255.255.0
         standby ip 192.168.2.1

    Will this work, or do I need standby 1 ip 192.168.2.1 on the F2/0 interfaces? Since, according to the RFC, the group number is carried in the HSRP multicast packets, my guess is that I don't need different groups, and that multiple groups are only needed when they all share the same Layer 2 segment. However, I haven't been able to find this setup documented anywhere.
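
    For what it's worth, a reasoning sketch rather than a confirmed answer: HSRP hellos go to the link-local multicast address 224.0.0.2 with TTL 1, so group numbers only have to be unique per Layer 2 segment, and reusing the default group 0 on both interfaces should work here. The group number does, however, select the virtual MAC (0000.0c07.acXX, where XX is the group in hex), so distinct groups per VLAN are a common convention:

      interface FastEthernet2/0
       ip address 192.168.2.2 255.255.255.0
       standby 2 ip 192.168.2.1    ! group 2, virtual MAC 0000.0c07.ac02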

  • Bootstrapping in CloudFormation with Autoscale

    - by PapelPincel
    My CloudFormation template creates an autoscaling group and bootstraps it with the utility script /opt/aws/bin/cfn-init. When I remove the bootstrap part from my template, the autoscaling group is created without any problem, but when I add it back the CloudFormation stack fails and this line is added to /var/log/cloud-init.log:

      Error: AutoScalingGroupName does not specify any metadata

    The line above appears right after the following command:

      /opt/aws/bin/cfn-init --verbose --configsets orderedConfig --region us-east-1 --stack AS15 --resource AutoScalingGroupName --access-key XXXXXXXXXXXXX --secret-key XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    Digging a little deeper, I added the following lines to cfn-init at the point where it exits:

      from pprint import pprint
      pprint(vars(detail))

    and I get the following trace when running the previous cfn-init command:

      {'_description': None,
       '_lastUpdated': datetime.datetime(2012, 7, 12, 14, 52, 42),
       '_logicalResourceId': u'AutoScalingGroupName',
       '_metadata': None,
       '_physicalResourceId': u'AS15-AutoScalingGroupName-HNPOXXXXXXXX',
       '_resourceStatus': u'CREATE_COMPLETE',
       '_resourceStatusReason': None,
       '_resourceType': u'AWS::AutoScaling::AutoScalingGroup',
       '_stackId': u'arn:aws:cloudformation:us-east-1:XXXXXXXXXXXXX:stack/AS15/XXXXXXXX-cc30-11e1-XXXXXX-XXXXXXXXXX',
       '_stackName': u'AS15'}

    As you can see, the _metadata field is empty, and that's why the stack fails to create. Are there any known side effects of cfn-init when used with autoscaling?
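
    One pattern worth checking (a sketch of the usual arrangement, with LaunchConfig as a hypothetical logical resource name): cfn-init reads AWS::CloudFormation::Init metadata from whichever resource --resource names, and with autoscaling that metadata normally lives on the launch configuration rather than on the autoscaling group itself, so the call would point there:

      /opt/aws/bin/cfn-init --verbose --configsets orderedConfig \
        --region us-east-1 --stack AS15 \
        --resource LaunchConfig    # the resource whose Metadata holds AWS::CloudFormation::Init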

  • What amount of physical RAM would a typical "commodity class" server have, as of late 2013?

    - by marathon
    I'm trying to spec out servers for my company's infrastructure group to build. They tell me anything more than 2 GB is too much, which I find ridiculous considering that cheap DRAM runs about 15 dollars per DIMM in bulk and our particular software runs better with more memory. I tried to find out how much RAM Google's servers use, and pinning down a number is hard; the best I could find, in a Google research paper, was that in 2008 their commodity servers were using 2 GB and 4 GB DIMMs, but the paper never said how many. I realize "commodity server" is a vague term, but I'm just looking for a rough range of RAM in use. I suspect at least 16 GB is the norm.

  • VPN/AFP server for centralized TimeMachine backups

    - by Keith Johnson
    I am the sysadmin for a small group of about 7 people who prefer Apple machines for their work. These machines are currently either a) not backed up at all, or b) backed up using Retrospect (which I'm not very fond of). I don't really have the budget for anything fancy, and I'd like to keep it as user-friendly as possible. Ideally I am thinking of a VPN server they can connect to (to keep the traffic secure, and because they work from home frequently) along with an AFP server for use with Time Machine. The goal is better backup coverage, user-initiated restores, and overall ease of use. Does this seem like a reasonable idea? Has anyone done this before? Are there any obvious problems I've overlooked?
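
    For the AFP side, netatalk is the usual open-source route; a minimal sketch of an AppleVolumes.default entry, assuming netatalk 2.x where the tm option advertises a volume as a Time Machine target (the path and group name are placeholders):

      # /etc/netatalk/AppleVolumes.default
      /srv/backups/timemachine "TimeMachine" allow:@staff options:tm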

  • Ubuntu server slowly filling up

    - by Crash893
    Our Samba server's (Ubuntu 8.04 LTS) share filled up the other day, but when I went to look at it I can't see that any of the shares hold that much. We have 5 group shares, and each user also has an individual share: one user has 22 GB of stuff, a few others have 10-20 MB, and everyone else is empty, so maybe 26 GB in total. I deleted a few files yesterday and freed up about 250 MB of space; today when I checked, the disk was completely full again. I deleted some older files and freed up about 170 MB, but I can watch the free space slowly creep down. I keep running df -h:

      Filesystem           1K-blocks      Used Available Use% Mounted on
      /dev/sda1            241690180 229340500    169200 100% /
      varrun                  257632       260    257372   1% /var/run
      varlock                 257632         0    257632   0% /var/lock
      udev                    257632        72    257560   1% /dev
      devshm                  257632        52    257580   1% /dev/shm
      lrm                     257632     40000    217632  16% /lib/modules/2.6.24-28-generic/volatile

    What can I do to try to hunt down what's taking up so much of my HDD? (I'm fairly new to Unix in general, so I apologize if this is not well explained.)
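
    Two shell checks that usually find this kind of creep (a diagnostic sketch): du totals what is reachable through the filesystem, while space held by deleted-but-still-open files (a classic cause of "full disk, small shares", often a runaway log) only shows up via lsof:

      # biggest trees directly under /, in KB, staying on this filesystem
      sudo du -xk --max-depth=1 / | sort -n
      # open files whose directory entry was deleted but whose space is still held
      sudo lsof +L1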

  • Windows 7 offline files - work temporarily offline even if network connection works

    - by Robert
    Sometimes I am connected via VPN to a network containing the server that stores the files cached by the Windows Offline Files feature. Sometimes the connection is good and working this way is no problem; at other times it is quite painful because of the high latency when working with the files in Windows Explorer. Is there an interactive way for a user (with admin permissions) to temporarily suspend online usage of offline files? I have already activated the "Transparent caching" group policy feature (Computer Configuration > Policies > Administrative Templates > Network > Offline Files) with a network latency of 200 msec, but in my experience, even when ping times to the file server are under 40 msec, online usage is quite tenacious. Setting a low latency threshold causes the offline files to toggle frequently, which breaks applications that work with several files and need them to be consistent (such as an SVN client).
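
    One avenue that may be worth testing, though I am not certain the method behaves as hoped (the WMI class Win32_OfflineFilesCache exists on Windows 7; treat the TransitionOffline call and its signature as an assumption to verify against MSDN): scripting the offline transition per share instead of waiting for slow-link detection:

      rem hypothetical sketch, run from an elevated prompt; \\server\share is a placeholder
      powershell -Command "$c = [wmiclass]'root\cimv2:Win32_OfflineFilesCache'; $c.TransitionOffline('\\server\share', 0)"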

  • Can subgroups be created in GitLab?

    - by niroshan.l
    We are working on migrating from gitolite to GitLab, but have run into a problem with the nested groups we created under gitolite. It looks like there is no feature in GitLab to create a subgroup, e.g. in gitolite we have:

      group1/group2/project1.git
      group1/project2.git
      group1/group2/project3.git
      group1/group3/project4.git

    When I import the repositories using bundles, GitLab is not able to identify the subgroups, and there is no option to create a subgroup in the GitLab UI. (Maybe I'm not using the proper terminology; maybe "subgroup" is not the correct word.) Apologies for the use of incorrect terms, as I am new to this. Thanks in advance. Regards, Niro
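
    If GitLab's group structure is flat (one level was the rule in older releases), a common workaround is to flatten the gitolite hierarchy into project names during the import; a sketch, assuming the bare repos sit under /home/git/repositories and that one top-level GitLab group per first path component is acceptable:

      cd /home/git/repositories/group1
      for repo in group2/*.git group3/*.git; do
        # group1/group2/project1.git -> exported name group2-project1.git
        flat=$(echo "$repo" | tr / -)
        git clone --mirror "$repo" "/tmp/export/group1/$flat"
      done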

  • I want to replace Windows Server with Samba 4 for authentication purposes [closed]

    - by garden air
    I am using Windows Server 2008 in a medium-sized organization with 200 clients at the moment. I have learned that Samba 4 brings remarkable changes for Windows clients, and I want to use it to authenticate my Windows XP and Windows 7 client machines. On my home PC I am testing Samba 4 under VMware, using CentOS 6.3. Kindly guide me: is it possible to replace the Windows server as a PDC and use Samba 4 only to authenticate my clients? I do not want Group Policy, only authentication. What would you suggest? Is it good to use Samba 4 instead of Windows Server? thanks, gardenair
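
    For reference, a minimal provisioning sketch with samba-tool (assuming a Samba 4.0-style build where the AD DC role is provisioned this way; realm, domain, and password are placeholders), after which Windows clients join the domain much as they would join a Windows DC:

      samba-tool domain provision --realm=EXAMPLE.LAN --domain=EXAMPLE \
        --server-role=dc --dns-backend=SAMBA_INTERNAL --adminpass='S3cret!pw'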

  • How to assign permissions to ApplicationPoolIdentity account

    - by Triynko
    In IIS 7 on Windows Server 2008, application pools can run as the "ApplicationPoolIdentity" account instead of the NetworkService account. How do I assign permissions to this ApplicationPoolIdentity account? It does not appear as a local user on the machine, it does not appear as a group anywhere, and nothing remotely like it shows up when I browse for local users, groups, and built-in accounts. What is going on? I'm not the only one with this problem: see "Trouble with ApplicationPoolIdentity in IIS 7.5 + Windows 7" for an example: "This is unfortunately a limitation of the object picker on Windows Server 2008/Windows Vista - as several people have discovered it already, you can still manipulate the ACL for the app-pool identity using command line tools like icacls."
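
    The identity can be addressed as IIS AppPool\<pool name> even though the object picker won't find it; a sketch of the icacls route mentioned above (the path and pool name are placeholders):

      icacls "C:\inetpub\wwwroot\MyApp" /grant "IIS AppPool\MyAppPool":(OI)(CI)(RX)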

  • Disable Bing search entirely from IE when using sysprep

    - by takesides
    I'm creating a Windows Server 2012 R2 image for our product (an all-in-one appliance installed on server hardware) and I am having incredible trouble getting Bing removed from Internet Explorer. Our clients do not want Bing at all: not as the default search, not as a "search from the address bar" feature, not in any sense. I've managed to configure everything else I want in IE using WAIK to generate an unattend XML file, but this has me completely stuck. Ideally I want Bing removed from IE altogether, but I would make do with it being completely disabled. I can find no options in either WAIK or Group Policy to configure this the way I need. Any ideas gratefully received.
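
    One angle that may help (a sketch, not a verified recipe: SearchScopes is the registry key where IE stores its search providers, but test the effect on address-bar search against your build): remove the provider list for each profile, e.g. from a FirstLogonCommands entry in the unattend file:

      rem delete the installed search providers (Bing is the only one by default)
      reg delete "HKCU\Software\Microsoft\Internet Explorer\SearchScopes" /f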

  • Ways to poll server status

    - by Yijinsei
    Hi guys, I created the same question on Stack Overflow, but I was recommended to post it here, so I apologise to those who see this post twice. I am trying to create a JSP page that shows the status of a group of local servers. Currently I have a scheduled class that constantly polls the servers at a 30-second interval, with a 5-second delay waiting for each server's reply, and provides the JSP page with the information. However, I find this approach inaccurate, as it takes some time before the scheduled class's information is updated. Do you have a better way to check the status of several servers within a local network?
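
    For what it's worth, a minimal sketch of the usual shape of such a poller (class and method names are hypothetical): check each server concurrently with a bounded connect timeout and publish the results into a map the JSP reads, so a slow server delays neither the page nor the other checks:

      import java.io.IOException;
      import java.net.InetSocketAddress;
      import java.net.Socket;
      import java.util.Map;
      import java.util.concurrent.*;

      public class StatusPoller {
          private final Map<String, Boolean> status = new ConcurrentHashMap<String, Boolean>();
          private final ScheduledExecutorService exec = Executors.newScheduledThreadPool(8);

          public void watch(final String host, final int port) {
              exec.scheduleAtFixedRate(new Runnable() {
                  public void run() {
                      Socket s = new Socket();
                      try {
                          s.connect(new InetSocketAddress(host, port), 5000); // 5 s cap per server
                          status.put(host, Boolean.TRUE);
                      } catch (IOException e) {
                          status.put(host, Boolean.FALSE);
                      } finally {
                          try { s.close(); } catch (IOException ignored) {}
                      }
                  }
              }, 0, 30, TimeUnit.SECONDS); // refresh every 30 s
          }

          // the JSP reads the last known state; it never blocks on a live check
          public boolean isUp(String host) {
              Boolean up = status.get(host);
              return up != null && up.booleanValue();
          }
      }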

  • SQL Server Performance & Latching

    - by Colin
    I have a SQL Server 2000 instance that runs several concurrent SELECT statements against a group of 4 or 5 tables. Often the performance of the server during these queries becomes extremely diminished: the queries can take up to 10x as long as other runs of the same query, and it gets to the point where simple operations like getting the table list in the object explorer or running sp_who can take several minutes. I've done my best to identify the cause of these issues, and the only performance metric I've found to be off base is Average Latch Wait Time. I've read that a wait time over 1 second is bad, and mine ranges anywhere from 20 to 75 seconds under heavy use. So my question is: what could be the issue? Shouldn't SQL Server be able to handle multiple SELECTs on a single table without losing so much performance? Can anyone suggest where to go from here to investigate this problem? Thanks for the help.
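
    On SQL Server 2000 (which predates the newer DMVs) the wait and latch breakdown can still be pulled with DBCC; a first-pass sketch, on the assumption that identifying the dominant wait type narrows this down to I/O, memory, or blocking:

      -- cumulative wait statistics since the counters were last cleared
      DBCC SQLPERF(waitstats)
      -- who is blocking whom right now
      EXEC sp_who2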

  • Unable to connect to sites using IIS7 Manager

    - by Phil.Wheeler
    I'm a developer who has been assigned the task of managing and configuring a new IIS7 instance on a remote server. My domain account has been added to the local Administrators group on the box, but IIS7 has been configured to accept connections only from accounts with Windows credentials. I've added my domain account to the IIS Manager permissions for one of my sites, but I'm still unable to connect from my local machine to that site, the IIS instance, or the server in general. There's obviously a missing element in the configuration of this setup, but I don't know where to start looking. The event logs on the IIS box show audit failures for my account when trying to connect remotely via the IIS7 Manager tool on my local machine. Suggestions gratefully received.
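
    One prerequisite that is easy to miss (a checklist sketch; the registry value and service name come from IIS remote-management documentation, but verify them on your build): remote IIS Manager connections only work when the Web Management Service is enabled and running:

      reg add "HKLM\SOFTWARE\Microsoft\WebManagement\Server" /v EnableRemoteManagement /t REG_DWORD /d 1 /f
      sc config wmsvc start= auto
      net start wmsvc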

  • bad printer isolation on print server or better way?

    - by Joseph
    I have noticed that when a printer or driver misbehaves on a Windows server, it usually locks up or kills the print spooler, and nobody can print until it is fixed. Usually we have to move the troublesome printer to another server so that when it fails it doesn't take the whole group down with it. That is assuming we ever figure out which printer is the problem. Is there a way to keep one bad apple from ruining the bunch? Another form of print serving would also work, as long as it isn't hard for users to find a printer and install drivers.

  • postgres memory allocation tuning 2

    - by pstanton
    I've got an Ubuntu Linux system with 12 GB of memory, most of which (at least 10 GB) can be allocated solely to Postgres. The system also has a six-disk 15k SCSI RAID 10 setup. The process I'm trying to optimise is twofold. First, a single-threaded, single-connection process does many inserts into 2-4 tables linked by foreign keys. Second, many different complex queries are run against the resulting data, using GROUP BY extensively; this part especially needs to be optimised. I have four of these processes running at once in order to make use of the quad-core CPU, so there will generally be no more than 5 concurrent connections (1 spare for admin tasks). What changes to the default Postgres configuration would you recommend? I'm looking for the optimum values for things like work_mem, shared_buffers, etc. Thanks!
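
    A starting-point sketch for postgresql.conf under those constraints (rules of thumb to benchmark, not measured values; with only ~5 connections a large work_mem is safe, though every sort or hash node in a plan can claim its own allocation):

      shared_buffers = 2GB           # ~25% of RAM is the usual opening bid
      effective_cache_size = 8GB     # planner hint: RAM left over for the OS page cache
      work_mem = 256MB               # per sort/hash operation; generous given 5 connections
      maintenance_work_mem = 1GB     # speeds up CREATE INDEX and VACUUM
      wal_buffers = 16MB
      checkpoint_segments = 32       # spread checkpoints out during the bulk inserts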

  • Listing lines from just one file in DIFF

    - by justintime
    I would like to get (GNU) diff to print only the lines that are different in one file. So given:

      ==> diffa.txt <==
      line1
      line2 - in a only
      line3
      line4 changed
      line5

      ==> diffb.txt <==
      line1
      line3
      line4 changed in b
      line5
      line6 in b only

    I would like diff --someoption diffa.txt diffb.txt to produce:

      line2 - in a only
      line4 changed

    The following looks as though it should be helpful, but it is a bit cryptic:

      --GTYPE-group-format=GFMT  Similar, but format GTYPE input groups with GFMT.
      --line-format=LFMT         Similar, but format all input lines with LFMT.
      --LTYPE-line-format=LFMT   Similar, but format LTYPE input lines with LFMT.

    LTYPE is `old', `new', or `unchanged'. GTYPE is LTYPE or `changed'. GFMT may contain:

      %<  lines from FILE1
      %>  lines from FILE2
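
    For the record, a combination of those options that should produce exactly the output above with GNU diff (suppress unchanged and new lines, print old-file lines verbatim via %L, the line's contents):

      diff --unchanged-line-format= --new-line-format= \
           --old-line-format='%L' diffa.txt diffb.txt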

  • fsck: FILE SYSTEM WAS MODIFIED after each check with -c, why?

    - by Chris
    Hi, I use a script to partition and format CF cards (connected with a USB card writer) in an automated way. After the main process I check the card again with fsck. To check for bad blocks I also tried the -c switch, but I always get a return value != 0 and the message "FILE SYSTEM WAS MODIFIED" (see below). I get the same result when checking the very same drive several times. Does anyone know a) why the file system is modified at all, and b) why this seems to happen every time I check, and not only in case of an error (like bad blocks)? Here's the output:

      linux-box# fsck.ext3 -c /dev/sdx1
      e2fsck 1.40.2 (12-Jul-2007)
      Checking for bad blocks (read-only test): done
      Pass 1: Checking inodes, blocks, and sizes
      Pass 2: Checking directory structure
      Pass 3: Checking directory connectivity
      Pass 4: Checking reference counts
      Pass 5: Checking group summary information
      Volume (/dev/sdx1): ***** FILE SYSTEM WAS MODIFIED *****
      Volume (/dev/sdx1): 5132/245760 files (1.2% non-contiguous), 178910/1959896 blocks

    Thanks, Chris
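
    A plausible explanation to verify rather than a confirmed one: with -c, e2fsck runs a badblocks pass and then writes the result into the filesystem's bad-block inode even when the list is empty, and any write counts as "FILE SYSTEM WAS MODIFIED" (an exit code of 1 means "errors corrected", not failure). A script-friendly split keeps the consistency check read-only:

      # pure read-only consistency check; no -c, so nothing is written
      fsck.ext3 -fn /dev/sdx1
      # surface-scan separately; reports bad sectors without touching the filesystem
      badblocks -sv /dev/sdx1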

  • 802.1X wireless clients auto-connect prior to user login

    - by JohnyV
    I am using a 2008 R2 DC that also performs RADIUS (NPS), and a 2008 R2 certificate authority that hands out certificates. The computers get the certificate, and when a user who has previously logged on logs into a device, they are put on the correct VLAN (according to their user access). However, I can't get the computers to join the wireless network prior to login, so that users can log in with their domain accounts and authenticate over the wireless. The basic setup: the computer gets a group policy that tells it to get a certificate; the computer then has a separate VLAN to join as just a computer account, but the wireless computer won't connect through that VLAN. (This VLAN allows login information only; once the user's credentials are verified, it puts them onto another VLAN.) So I am trying to work out why the notebook won't auto-connect to the wireless network as a computer. Thanks
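
    The setting to look at first (a pointer, with profile and file names as placeholders): the wireless profile's 802.1X authentication mode must be "machine or user" (or "machine") rather than "user" for the box to associate before anyone logs on. The deployed profile can be inspected and corrected by round-tripping it through netsh:

      netsh wlan export profile name="CorpWiFi" folder=C:\temp
      rem edit <authMode>machineOrUser</authMode> inside the OneX element, then:
      netsh wlan add profile filename=C:\temp\CorpWiFi.xml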

  • Export GFI MailArchiver e-mails for import into Exchange 2010 SP1 Personal Archiving

    - by pk.
    We have an existing installation of GFI MailArchiver 5 with several databases of archives (perhaps 100-150 GB). The goal is to export each user's archived e-mail and then import it into Exchange 2010 SP1 Personal Archives. GFI has a tool to do this, but it's very rudimentary and has severe, frankly unworkable, limitations: it only allows me to query based on the e-mail headers. Given that we have multiple aliases that may show up in multiple headers (To:, Cc:), not to mention that headers won't reflect a user's membership in a distribution group at a given point in time, this tool will not suffice. Another option is to extract the e-mails from the GFI databases without using the tool, but that would require me to write my own tool to reconstruct them, and I would really rather not go down that path. I feel very stuck on this issue. Has anyone here done a similar migration? How can this best be handled?
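
    For the Exchange end of the pipeline, at least, the import step is well defined: Exchange 2010 SP1 can ingest a per-user PST directly into the Personal Archive (the mailbox name and UNC path below are placeholders), so the hard part reduces to getting clean per-user PSTs out of GFI:

      # requires the "Mailbox Import Export" RBAC role; FilePath must be a UNC path
      New-MailboxImportRequest -Mailbox jdoe -FilePath \\fileserver\pst$\jdoe.pst -IsArchive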
