Search Results

Search found 22998 results on 920 pages for 'supervised users'.


  • Adventures in Drupal multisite config with mod_rewrite and clean urls

    - by moexu
    The university where I work is planning to offer Drupal hosting to staff/faculty who want a Drupal site. We've set up Drupal multisite with clean URLs, and it's mostly working except for some weird redirects: if you have two sites where one name is a substring of the other, you'll randomly be redirected to the other site. I tracked the problem to how mod_rewrite does path matching. With a config file like this:

        RewriteCond %{REQUEST_URI} ^/drupal
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    /drupaltest will match the /drupal condition, and all of the links on the /drupaltest page will be rewritten to point to /drupal. If you put the end-of-string character ($) at the end of each RewriteCond, it will always match the correct site and the links will always be rewritten correctly; that breaks down as soon as a user logs in, though, because the query string is appended to the URL, so the bare base URL no longer matches. You can also fix the problem by ordering the sites in the config file so that the shortest substring always comes last. I suggested storing all of the sites in a table and then querying, sorting, and rewriting the config file every time a Drupal site is requested, so that we could guarantee the order; the system administrator thought that was kludgy and didn't address the root problem. Disabling clean URLs should also fix the problem, but the users really want them, so I'd prefer to keep them if possible. I think we could also fix it by using an .htaccess file in each site to handle the clean URL rewriting, but that also seems suboptimal, since it would generate a higher load on a server intended to host the majority of the university's external-facing web content. Is there some magic I can do with mod_rewrite to get it to work? Would another solution be better? Am I doing something the wrong way to begin with?
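    A minimal sketch of one possible fix (an assumption, not from the post): anchor each prefix on "slash or end-of-string", so /drupaltest can no longer satisfy the ^/drupal condition while deeper paths such as /drupal/node/1 still match. Since %{REQUEST_URI} never includes the query string, logging in should not break these conditions either:

        RewriteCond %{REQUEST_URI} ^/drupal(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupal/index.php?q=$1 [last,qsappend]

        RewriteCond %{REQUEST_URI} ^/drupaltest(/|$)
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^(.*)$ /drupaltest/index.php?q=$1 [last,qsappend]

    With the conditions anchored this way, the order of the site blocks no longer matters.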

  • printer assignments for windows xp workstations within an active directory environment

    - by another_netadmin
    I'm using the following script to remove any old networked printers from machines and then assign the proper ones, making one of them the default. The script is assigned to the OU the workstations reside in and uses Group Policy loopback, so all users who log in get the appropriate printers mapped for them. I tried to use the new Print Management console in W2K3 R2, but when assigning the default printer that way I get an error that the printer doesn't exist, so I'm back to using the script. One flaw I'm noticing is that it won't remove printers that happen to be mapped from an RDP session (we don't see this everywhere, but it does occur in a few locations). Is there any way to enumerate all RDP printers and remove them, similar to how I'm enumerating and removing networked printers?

        ' Printers.vbs - Windows Logon Script.
        RemovePrinters
        AddPrinters

        Sub RemovePrinters()
            On Error Resume Next
            Dim strPrinter
            Set objNetwork = WScript.CreateObject("WScript.Network")
            Set colPrinters = objNetwork.EnumPrinterConnections
            For i = 0 To colPrinters.Count - 1 Step 2
                strPrinter = CStr(colPrinters.Item(i+1))
                If Not InStr(strPrinter, "\\") = 0 Then
                    objNetwork.RemovePrinterConnection strPrinter, True, True
                End If
            Next
        End Sub

        Sub AddPrinters()
            On Error GoTo 0
            Set objNetwork = CreateObject("WScript.Network")
            objNetwork.AddWindowsPrinterConnection "\\printers1\JH120-DELL5310"
            objNetwork.SetDefaultPrinter "\\printers1\Jh120-DELL5310"
        End Sub
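    A hedged VBScript sketch for the RDP half (assumption: redirected queues follow the default "PrinterName (redirected N)" naming, so the name is matched rather than any dedicated RDP attribute):

        ' Remove RDP-redirected printers by matching the default naming convention.
        Sub RemoveRdpPrinters()
            On Error Resume Next
            Dim objWMI, colRdp, objPrinter
            Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
            Set colRdp = objWMI.ExecQuery("Select * From Win32_Printer Where Name Like '%(redirected%'")
            For Each objPrinter In colRdp
                objPrinter.Delete_   ' delete the redirected queue
            Next
        End Sub

    Calling RemoveRdpPrinters alongside RemovePrinters in the logon script should then clear both kinds of stale mappings.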

  • How can I start hostednetwork on Windows 7?

    - by Pirozek
    When I type the command to start the hosted network in an admin console:

        netsh wlan start hostednetwork

    it gives me this:

        The hosted network couldn't be started.
        The group or resource is not in the correct state to perform the requested operation.

    There is a hotfix from Microsoft, but it didn't help me. Any advice?

        C:\Users\Pirozek>netsh wlan show driver

        Interface name: Wireless Network Connection 3

            Driver                    : D-Link AirPlus DWL-G520 Wireless PCI Adapter (rev.B)
            Vendor                    : Atheros Communications Inc.
            Provider                  : Atheros Communications Inc.
            Date                      : 8.7.2009
            Version                   : 8.0.0.171
            INF file                  : C:\Windows\INF\oem108.inf
            Files                     : 2 total
                                        C:\Windows\system32\DRIVERS\athrx.sys
                                        C:\Windows\system32\drivers\vwifibus.sys
            Type                      : Native Wi-Fi Driver
            Radio types supported     : 802.11b 802.11g
            FIPS 140-2 mode supported : Yes
            Hosted network supported  : Yes
            Authentication and cipher supported in infrastructure mode:
                                        Open             None
                                        Open             WEP-40bit
                                        Shared           WEP-40bit
                                        Open             WEP-104bit
                                        Shared           WEP-104bit
                                        Open             WEP
                                        Shared           WEP
                                        WPA-Enterprise   TKIP
                                        WPA-Personal     TKIP
                                        WPA2-Enterprise  TKIP
                                        WPA2-Personal    TKIP
                                        Vendor defined   TKIP
                                        WPA2-Enterprise  Vendor defined
                                        Vendor defined   Vendor defined
                                        WPA-Enterprise   CCMP
                                        WPA-Personal     CCMP
                                        WPA2-Enterprise  CCMP
                                        Vendor defined   CCMP
                                        WPA2-Enterprise  Vendor defined
                                        Vendor defined   Vendor defined
                                        WPA2-Personal    CCMP
            Authentication and cipher supported in ad-hoc mode:
                                        Open             None
                                        Open             WEP-40bit
                                        Open             WEP-104bit
                                        Open             WEP
                                        WPA2-Personal    CCMP
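    For reference, a sketch of the usual two-step sequence (SSID and key are placeholders); the "set" step must succeed before "start" will work, and the driver must report "Hosted network supported : Yes", which this one does:

        netsh wlan set hostednetwork mode=allow ssid=TestNet key=Passphrase123
        netsh wlan start hostednetwork

    If the error persists, checking that the "Microsoft Virtual WiFi Miniport Adapter" device is present and enabled in Device Manager is a reasonable next step.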

  • McAfee ePolicy-Orchestrator (ePO) - policy ownership by groups?

    - by bkr
    Is there a way to grant ownership of an ePO policy to a group? Alternatively, is there a permission that can be set that would allow owners of an ePO policy to add other owners to that policy without making them ePO admins?

    In the case I'm looking at, ePO is deployed within a large heterogeneous organization with a large amount of delegation, in the form of create/modify policy rights, to allow multiple IT departments to customize policies for their sections of the system tree. The problem is that each policy is owned by its creator. This causes problems when they leave (staff turnover) or when other people on their teams need the ability to modify an existing policy. Unfortunately, as far as I can see, only an ePO admin can change the owners; even the owner of the policy cannot add other owners (unless they are also an ePO admin).

    Ideally, I should be able to assign ownership of a policy to a group, since that would be easier to manage than me or another admin continually fixing policy ownership or removing orphaned policies. Even just allowing the owners of policies to add other owners would be sufficient. How are other people handling policy ownership when dealing with a large amount of delegated control of policies? Is there a way to delegate this out without making users full ePO admins?

  • postfix received and delivered have the same values (?)

    - by thinkingbig
    I have configured my first server (Debian with ISPConfig). I want to send bulk e-mails to our users, so I configured and started Postfix... but after one hour of sending emails I have logs like this:

        Grand Totals
        ------------
        messages
          21886   received
          21883   delivered
              0   forwarded
              0   deferred
            234   bounced
              0   rejected (0%)
              0   reject warnings
              0   held
              0   discarded (0%)

         30805k   bytes received
         31280k   bytes delivered
              3   senders
              3   sending hosts/domains
          12588   recipients
              3   recipient hosts/domains

        Per-Hour Traffic Summary
            time         received  delivered  deferred  bounced  rejected
            --------------------------------------------------------------
            1500-1600       15311      15306         0      168         0
            1600-1700        6575       6577         0       66         0
            (all other hours: 0 across every column)

        Host/Domain Summary: Message Delivery
            sent cnt  bytes   defers  avg dly  max dly  host/domain
            21521     30353k  0       3.4 m    15.5 m   wp.pl
              355       919k  0       54.9 s   13.0 m   mysenderdomainexample.pl
                7       8477  0       1.7 s    1.9 s    prokonto.pl

        Host/Domain Summary: Messages Received
            msg cnt   bytes   host/domain
            21879     30786k  mysenderdomainexample.pl
                5     16196   mx4.wp.pl
                1      3200   mx3.wp.pl

        Senders by message count
            21783   [email protected]
               96   [email protected]
                6   from=<

    So, my questions are:

        1. Why do received and delivered show (approximately) the same values?
        2. How can I check whether an email has actually been delivered?
        3. How do I change the default "root" and "www-data" sender (From / Return-Path) to another address? I have changed this in the script, but Postfix ignores the script's values and sends every mail from root (we have PHP send crons in /etc/crontab).
        4. Why were approximately 100% of the received mails addressed to my sender host (see "Host/Domain Summary: Messages Received" above)?

    Waiting for a response. Regards, TB
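    On question 3, a hedged PHP sketch: the envelope sender (Return-Path) used by mail() can be set per message through the fifth argument, which Postfix's sendmail wrapper accepts as -f; the From: header is set separately. All addresses here are placeholders:

        <?php
        // Set both the visible From: header and the envelope sender (Return-Path).
        $headers = 'From: app@example.com';
        mail('user@example.com', 'Subject', 'Body text', $headers, '-fbounces@example.com');

    Cron jobs that pipe output to the sendmail binary directly can pass the same -f flag there.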

  • How do I enable Ubuntu Gnome system tools?

    - by RussellW
    I am running Ubuntu 10 with Gnome 2.30.2. This is a VMware Workstation image provided by another company, for which I have no support. I am trying to access the graphical tools for configuring the network, users, and services, but the System > Administration menu does not have these options listed. The main issue I am trying to solve is to correct the problems with the Gnome menu options and network settings. I have the gnome-system-tools package installed, yet I am unable to run command-line versions of the tools, such as nm-applet (I get no GUI if I run that command; the process is running in the background). I realize that I can perform many tasks on the command line, but I would like to use the GUI for administrative functions, as I am not overly proficient with all the commands for restarting services and setting a static IP with a specific gateway. Further, I can run gnome-nettool, but I cannot change the IP; I can only see my network card. nm-connection-editor does not show any network cards that I can configure to change the IP. Currently I am getting an address over DHCP through my NAT in VMware, but I want to set a specific IP address instead.

    Screenshots (note the missing options in the first two menus):

        Preferences Menu: http://i.imgur.com/kl8pP.png
        Admin Menu: http://i.imgur.com/K3Cjz.png
        Network Tools (I can view but not change the IP address): Iq7Xb.png
        Network Settings (unable to change the IP address): 7wheV.png
        Network Connections (no connections listed, not even my existing Ethernet NAT connection through VMware): J2ad8.png
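    If the GUI tools stay broken, a hedged fallback for the static address, using ifupdown syntax in /etc/network/interfaces (all addresses are placeholders for whatever the VMware NAT network uses):

        auto eth0
        iface eth0 inet static
            address 192.168.80.10
            netmask 255.255.255.0
            gateway 192.168.80.2

    followed by sudo /etc/init.d/networking restart to apply it.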

  • Why does Google Analytics use two domains?

    - by AKeller
    I'm building a distributed widget that is comparable to Google Analytics. Users will add a <script> tag to their site that references my widget's JavaScript file. The Google Analytics tracking code looks like this:

        var _gaq = _gaq || [];
        _gaq.push(['_setAccount', 'UA-XXXXXXXX-X']);
        _gaq.push(['_trackPageview']);

        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();

    Can anyone explain the reasoning behind separate HTTP and HTTPS hostnames? My instinct is to just secure the www address and then use the protocol-relative syntax, like //www.google-analytics.com/ga.js. But I'm sure the Google Analytics architects put a lot of thought into this approach, and I'd love to understand their logic before I follow or ignore their model.
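    For comparison, a sketch of the protocol-relative variant the question proposes (it assumes a single hostname serves both schemes with a certificate valid for that name):

        (function () {
            var ga = document.createElement('script');
            ga.type = 'text/javascript';
            ga.async = true;
            ga.src = '//www.google-analytics.com/ga.js';   // scheme inherited from the embedding page
            var s = document.getElementsByTagName('script')[0];
            s.parentNode.insertBefore(ga, s);
        })();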

  • Combat server downtime by duplicating server and re-routing when main server is down

    - by Wasim
    I have a CentOS server which at times either crashes or gets attacked with DDoS. At the moment I have an off-site backup holding 1.7 TB of data, and I'm paying as much for the backup as I am for the server, so I'm looking for advice from experienced people on how best to proceed. Would it be viable to ditch the off-site backup and instead purchase an additional server that is an exact duplicate of the first, so that if the first server goes down, users are re-routed to the second server without ever noticing the outage? This would give me an automatic (albeit not off-site) backup of the first server and remove the need for the expensive off-site backup. Is this a true alternative to pricey backups, or is an off-site backup absolutely necessary? And how would I go about doing this? It's obviously fairly complex, so links to reading material, or even just the terminology for the procedure, would be great. I appreciate the help and advice.
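    A hedged sketch of the warm-standby idea (hostnames and paths are placeholders): mirror the data one way on a schedule, and note that the re-routing itself still needs a health check plus DNS or IP takeover (e.g. keepalived, or low-TTL DNS failover) in front of both machines:

        #!/bin/sh
        # One-way mirror of the primary's content to the standby over SSH.
        rsync -az --delete /var/www/ standby.example.com:/var/www/
        # Sync database dumps rather than live datafiles, to avoid inconsistent copies.
        rsync -az /var/backups/mysql/ standby.example.com:/var/backups/mysql/

    A standby synced this way protects against hardware failure and DDoS on the primary, but unlike a true off-site backup it does not protect against accidental deletions that replicate before they are noticed.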

  • Utilities for finding x/y screen coordinates

    - by slothbear
    I have an application that needs the user to enter the x and y location of a couple of items on the screen. [Yes, this is crap and I'll replace it, but for now...] On Mac OS X, I'm using Snapz Pro X. When I choose to take a snap of a selection, the interface displays the mouse cursor location. This is OK for my personal use, but I can't ask users to buy a $69 program for this function. I thought I had a solution with the built-in program "Grab", but it reports the coordinates from the bottom left, and I need it from the top left. I don't have access to Windows; not sure what to use there. I've not even been able to come up with good search terms since mouse and coordinates and x/y are so common. Ideas are welcome there. Extra credit: same x/y finder for Linux. A Java program would be OK too, since the main application is in Java.
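    Since the main application is in Java, a minimal cross-platform sketch using only the standard library (coordinates are measured from the top-left of the primary screen on Windows, OS X, and Linux alike):

        import java.awt.MouseInfo;
        import java.awt.Point;

        public class MouseLocator {
            public static void main(String[] args) throws InterruptedException {
                while (true) {
                    Point p = MouseInfo.getPointerInfo().getLocation();
                    System.out.println("x=" + p.x + "  y=" + p.y);
                    Thread.sleep(1000);   // print the cursor position once per second
                }
            }
        }

    Run from a terminal, the user can hover over each item and read off the printed values, which sidesteps the bottom-left-origin problem in Grab.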

  • Hybrid Exchange Online setup with on premise public folders, certificate issues?

    - by exxoid
    We have a hybrid Exchange setup with Exchange Online (v15 tenant) and Exchange 2010 on premise. The hybrid configuration is for the most part working; the issue I'm having is getting public folders to work for cloud users. I followed the official documentation here (http://technet.microsoft.com/en-us/library/dn249373(v=exchg.150).aspx) and it kind of works: when I am using Outlook on public Wi-Fi, I am able to bring up the cloud mailboxes, and the on-premise public folders show up in Outlook. But when a cloud user runs Outlook on the same LAN as the on-premise Exchange, Outlook makes the outlook.com connection for the live/AD/archive mailbox but fails to create a proxy connection for the on-premise public folders. The error I get is a certificate mismatch; it seems that when a user on the LAN accesses Outlook/Exchange, it uses a different certificate than when Outlook is launched on an outside Wi-Fi network. When I look at the Outlook connection information, I see the connection to outlook.com for the AD/live/archive mailbox but no entry for a public folder connection. Our on-premise Exchange is 2010 SP3 with the latest CUs. The client is a domain-joined laptop with Windows 7 and Office 2010 SP2, with the latest Windows updates applied. Our infrastructure has a working ADFS 3 and DirSync setup for Office 365. My question, then: what do I need to do to make sure that a cloud user launching Outlook on the LAN uses the proper certificate (the wildcard third-party certificate, rather than the self-signed certificate it appears to be using during the connection attempt)?
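    If the mismatch turns out to be the Outlook Anywhere certificate principal name, a hedged sketch of the usual check and fix (Exchange Management Shell on premise; contoso.com stands in for the name on the wildcard certificate):

        # Show the current principal-name settings for each Outlook provider.
        Get-OutlookProvider
        # Pin the expected wildcard name for Outlook Anywhere clients.
        Set-OutlookProvider EXPR -CertPrincipalName msstd:*.contoso.com

    Comparing the name Outlook reports in the certificate warning against the CertPrincipalName values should confirm or rule this out.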

  • Outlook 2007 panes keep moving when changing resolution

    - by SilverbackNet
    This problem has really been bugging one of our users ever since he got a larger monitor. Now that the monitor has a different resolution than his laptop, every time he unplugs it to go home, the three Outlook panes get all jumbled up: the navigation pane is huge, the list is shoved over, and the reading pane is almost smushed out of existence; then the opposite happens when he comes back in, and the reading pane fills the screen. He's sick of adjusting it every day. He always runs Outlook maximized, for maximum reading area, so keeping the application within a 1024x768 window wouldn't really be an option for him. Is there any way built into Outlook to automatically adjust pane sizes when the resolution changes? If not, is there a third-party app that can help, or a way to script the changes into the registry somehow? (I can handle running the script whenever the screen state changes.) If this is fixed in Outlook 2010, I might be able to convince the other admin that this is a good enough reason to allow it (which will require a new beta version of our archiving software).

  • Best method to redirect internal DNS to external website?

    - by ProfessionalAmateur
    We host several web-based applications outside of our intranet. The URLs for these applications are long, complex, and overall not user friendly, e.g.:

        http://hostingsite:port/approot/folder/folder/login.aspx     <-- (production)
        http://hostingsite:port22/approot/folder/folder/login.aspx   <-- (dev)
        http://hostingsite:port33/approot/folder/folder/login.aspx   <-- (test)

    I'd like to create internal DNS entries that let users access these sites with ease, e.g.:

        http://prod --> http://hostingsite:port/approot/folder/folder/login.aspx
        http://dev  --> http://hostingsite:port22/approot/folder/folder/login.aspx

    I'm not familiar with the DNS process and setup. As far as I know, a DNS record can only point at an IP address, not at a port or a directory path as described above; is that a correct assumption? I am thinking of throwing up an internal web server that answers for the internal DNS names and redirects to the external sites:

        http://prod --> [internal webserver] --> redirect --> http://hostingsite:port/approot/folder/folder/login.aspx

    Is there a better way to do this?
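    A hedged sketch of that internal redirector on Apache (assumes "prod" resolves to the redirector via an internal A or CNAME record; 8080 is a placeholder for the real application port):

        <VirtualHost *:80>
            ServerName prod
            RedirectMatch ^/$ http://hostingsite:8080/approot/folder/folder/login.aspx
        </VirtualHost>

    One vhost per friendly name (prod, dev, test) keeps the mapping explicit, and matching only ^/$ avoids accidentally rewriting deeper paths.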

  • Industrial strength cloud file storage

    - by ArthurG
    I'm looking for an industrial-strength cloud file storage system, to be used by multiple people in a startup. Our requirements:

        - Transparent file system access: files and folders in the file system must be able to transparently access (read and write) files in the cloud; files must be synchronized whenever network access is available and buffered otherwise. The system must be usable by non-technical people.
        - Access control: we need to control who can access which files, at least on a very coarse basis. E.g., the developers will be able to access the system design documents, only the corporate folks can access recruiting documents, and only management can access certain corporate documents. Dropbox provides this via Sharing folders, but that's not adequate, if I understand it correctly, because there's no authentication of the sharing user. So the cloud service should have a notion of an account (our startup) with multiple users, with distinct credentials and rights for each user.
        - Clients: it must be accessible from Macs and PCs; I would hope that it supports Linux (e.g., Ubuntu) too.
        - Security: it must provide robust security.
        - Backup: the cloud service must reliably back up the files.
        - Versioning: change version history is a big plus, but not required.
        - Not free: we're willing to pay for the service.

    So far, we've reviewed the following, albeit not completely thoroughly:

        - Dropbox: has all except (1) access control, for the reason above, and (2) security, as discussed here http://www.economist.com/blogs/babbage/2011/05/internet_security and here http://blog.dropbox.com/?p=821.
        - Windows Live Mesh: has all except (1) clients, only supporting Windows 7 and OS X.
        - SpiderOak: has all except (1) transparent file system access, which is only available for 1 user.
        - Amazon Cloud: doesn't offer (1) transparent file system access.
        - Rackspace Cloud Drive: has all except (1) access control and (2) versioning.

    I'll gladly include any clarifications or additional systems the community provides. Arthur

  • MS Word TOC that references # pages rather than page number

    - by buttonsrtoys
    We frequently need to write specifications in Word which require a TOC that refers to the total number of pages in each section, rather than the page number, e.g.:

        Section                         No. Pages
        01010 Summary of Work..............5
        01025 Prices.......................2
        01400 Quality Control..............1
        01700 Contract Close Out...........2

    A wrinkle is that each section is a separate file. To date, we've been writing our TOC by hand, which has introduced every error imaginable. Is there a Word feature that populates a TOC with page totals? If not, I've done a little VB in Office, so I wouldn't be opposed to that route, as long as the result is usable by our low-tech users. A related question: all the section files are in the same folder, and it would be nice if the TOC loaded every file in the folder, rather than having to specify each one. Is this a feature of Word, or would it require VB? We tried a master document with links to subdocuments, but since the number of section files ebbs and flows with each project, that approach required too much maintenance for our Wordophobes.
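    A hedged Word VBA sketch of the folder-driven approach (the folder path, .doc extension, and tab layout are assumptions; it appends one "section <tab> pages" line per file to the document the macro runs from):

        Sub BuildSectionTOC()
            Dim f As String, doc As Document, toc As Document, pages As Long
            Set toc = ActiveDocument          ' the TOC document the list is written into
            f = Dir("C:\Specs\*.doc")
            Do While f <> ""
                Set doc = Documents.Open(FileName:="C:\Specs\" & f, _
                                         ReadOnly:=True, Visible:=False)
                pages = doc.ComputeStatistics(wdStatisticPages)
                toc.Content.InsertAfter Left$(f, InStrRev(f, ".") - 1) & vbTab & pages & vbCr
                doc.Close SaveChanges:=False
                f = Dir   ' next file in the folder
            Loop
        End Sub

    Re-running it whenever the folder changes would replace the hand-written list, and a tab stop with a dot leader can reproduce the dotted formatting.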

  • Apache LDAP with local groups

    - by Greg Ogle
    I have a server that currently uses htpasswd to authenticate users. I'm migrating to LDAP, but my LDAP server handles only user authentication and doesn't let me define groups. I still need groups, as they are used for access control via the Apache <Directory> tags in my configuration. The alternative is to revisit the access control altogether, using PHP or something of the sort to limit access.

    This works for 'basic' authentication:

        <Directory /misc/www/html/site>
            # LDAP & other config stuff irrelevant to the issue
            Require ldap-group cn=<service>,ou=Groups,dc=<service>,dc=<org>,dc=com
        </Directory>

    Attempted:

        <Directory /misc/www/html/site>
            # LDAP & other config stuff irrelevant to the issue
            # groups file from the previous htpasswd configuration; I tried to tweak it
            # to match the new user format, but I don't think Apache ever looks it up
            AuthGroupFile /misc/www/htpasswd/groups
            # added the group, which is how it works when using htpasswd
            Require ldap-group cn=<service>,ou=Groups,dc=<service>,dc=<org>,dc=com
            Require group xyz
        </Directory>
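    A hedged sketch of one way to combine the two (Apache 2.2 with mod_authnz_ldap; the key assumptions are that the group file's member names match the usernames LDAP authenticates, and that AuthzLDAPAuthoritative Off lets authorization fall through from LDAP to the group file):

        <Directory /misc/www/html/site>
            AuthType Basic
            AuthName "Restricted"
            AuthBasicProvider ldap
            # AuthLDAPURL ... as in the existing config
            AuthzLDAPAuthoritative Off
            AuthGroupFile /misc/www/htpasswd/groups
            Require group xyz
        </Directory>

    with a group file of the usual htgroup form, e.g. "xyz: alice bob".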

  • Client cannot access my IIS7 web server

    - by Soccerwiz
    I have a Windows 2008 web server running IIS 7 with about 25 websites. One of those sites is a SaaS application that is accessed constantly throughout the day. However, one particular client keeps getting blocked from my server: they will be using the service, and then all of a sudden they cannot access the application, or any other site on the server. The entire office of 4 users is blocked from accessing anything on the web server. A traceroute shows they get all the way to the server before they are blocked; however, they can access a Linux server that runs as a different VM, with a different IP, on the same physical server. Also, while they are blocked from their office, they can still access the site from their mobile phones or the local Starbucks, and since they are on a dynamic IP address, they can occasionally regain access to the web server by resetting their router. I checked IIS, and all IPs are allowed to access the server; there is nothing in the logs that says anything about a user being banned. I really have no idea what is causing this. Could it be a virus on their end? I have even moved the SaaS application to a completely new server in a different location; they were fine for about a month, and then the problem started occurring again. Are there any hidden blacklists in IIS? Or is it a routing issue on their end?

  • Re-packaging commercial software into RPM packages

    - by gac
    The situation is this: I have a small CentOS 5 "cluster" (currently 7 machines, with potential for more) which runs a commercially available software package that's distributed essentially in tarball format (it's actually a zip file with a mixture of Windows/Linux binaries and an installation shell script, with no potential for automation). I'd like to re-package this somehow into an RPM package (ideally one I can throw onto a self-hosted yum repository) in order to keep these "cluster" machines both up to date and consistent. I could do 7 manual installations, but there's scope for error. As I understand it, I'll need to accomplish the following tasks (see the sketch after this list):

        - add a non-privileged user to the target system, so the daemon runs without unnecessary root privileges
        - package up the binary files themselves from the final installation location on a separate build machine (probably under /opt/package for sanity's sake); no source is available
        - add a firewall hole so that end-users can communicate with the "cluster" nodes
        - add a cron task which can start the daemon on @reboot

    I'm finding plenty of good packaging resources so far, but all are based on the traditional method (i.e. as if I were the vendor packaging up my source files), rather than on re-packaging a ton of binary files from an already-installed instance of the application, which is the only option available to me. Does anyone have good resources they can share for achieving this goal? Thanks!
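    A hedged sketch of a binary-repackaging spec file (every name and path is a placeholder; AutoReqProv is disabled so rpmbuild doesn't second-guess the vendor binaries' dependencies):

        Name:           vendorapp
        Version:        1.0
        Release:        1%{?dist}
        Summary:        Repackaged vendor binaries
        License:        Proprietary
        Source0:        vendorapp-1.0.tar.gz
        BuildRoot:      %{_tmppath}/%{name}-%{version}-root
        AutoReqProv:    no

        %description
        Vendor application repackaged from an installed instance.

        %prep
        %setup -q

        %install
        mkdir -p %{buildroot}/opt/vendorapp
        cp -a * %{buildroot}/opt/vendorapp/

        %pre
        getent passwd vendorapp >/dev/null || useradd -r -d /opt/vendorapp vendorapp

        %files
        %defattr(-,vendorapp,vendorapp,-)
        /opt/vendorapp

    The firewall hole and the @reboot cron entry could be handled in %post, or shipped as files listed under %files alongside the binaries.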

  • Can't delete ntuser.dat file to remove profiles after reboot

    - by Matrix Mole
    I've run into an issue where some servers will not release the handle on the ntuser.dat file even after a reboot; or, quite possibly, the ntuser.dat file is getting re-loaded into memory after the reboot. The user accounts are definitely not being accessed (some of them belong to users who have not been with the company in over a year). It seems to be on Windows 2003 servers, but I can't be 100% certain that there aren't some 2000 servers showing this issue as well. When I try to use Process Explorer or handle.exe from Sysinternals to kill the handle on these ntuser.dat files, the handle remains open and connected; handle.exe even reports that the handle was broken while it remains in use. I've even taken ownership of the file and tried to kill the handle, to no effect (Windows shows I have ownership of the file, but still refuses to release the handle). I have looked in the registry to see if I can discover where the files may be getting loaded, but unfortunately the username appears in too many places for me to be certain which one is actually loading their registry hive into memory. Any suggestions on how I can either break the handle on these files, or prevent them from being re-loaded after a reboot?
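    For the manual half, a hedged sketch with the Sysinternals tool already mentioned (the handle ID and PID in the second line come from the first command's output; forcing a handle closed can destabilize the owning process, so it's a diagnostic last resort):

        :: Find which process holds ntuser.dat open, and the handle's hex ID.
        handle.exe ntuser.dat
        :: Force-close that specific handle (-c takes the hex ID, -p the owning PID).
        handle.exe -c 1A4 -p 1234 -y

    If the owning process turns out to be a service or the System process, the hive is being held by the OS itself, which points at the re-loading theory rather than a stuck handle.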

  • Server 2008 R2 & Domain Trusts - Attempt to Compromise Security

    - by SnAzBaZ
    We have two separate Active Directory domains, EUROPE and US, with a two-way trust between the domains/forests. I have a group of users called "USA Staff" that has access to certain shares on servers in the EUROPE domain, and a group called "EUROPE Staff" which has access to shares in the USA domain. Recently the USA PDC was upgraded to Windows Server 2008 R2. Now, when I try to access a share on a USA server from a Windows 7 workstation in the EUROPE domain, I get the "Please enter your username / password" dialog box with a message at the bottom: "The system has detected a possible attempt to compromise security." When I enter a username/password for a user in the USA domain, I can then access the network resource. Entering credentials for a EUROPE user, however, does not give me access, even though my NTFS and share permissions are set to allow it. Windows Server 2003 / Windows Server 2008 did not have this problem; it seems to be unique to R2. I found KB938457 and opened up port 88 on the Server 2008 R2 firewall, but it did not make any difference. Any other suggestions as to what to turn off in R2 to get this working again? Thanks

  • What to do after a fresh Linux install in a production server?

    - by Rhyuk
    I haven't had previous experience with the 'serious' IT scene. At work I've been handed a server that will host an application and MySQL (I will install and configure everything); this will be a production server. Soon I will be installing RHEL 5 on it, but I would like to know: if you get a new production server, what are the first 5 things you would do after a fresh Linux install (configuration/security/reliability wise)?

    EDIT: Added more information regarding the server environment and server roles:

        - The server will be inside my company's intranet/firewall.
        - The server will receive files (GBs) in binary form from another internal server. The application installed on this server is in charge of "translating" all that binary into human-readable input, and the server will be queried for this information.
        - Only 2-3 (max) users will be logging in.
        - (2) 145GB HDs in RAID1 for the OS, and (2) 600GB HDs in RAID1 for data.

    I know I may not get the perfect guideline, but anything is better than leaving everything at defaults.
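    Not a complete answer, but a hedged first-pass sketch of the usual opening moves on a fresh RHEL 5 box (the account name is a placeholder):

        # Patch everything, then create a day-to-day non-root account.
        yum update -y
        useradd appadmin && passwd appadmin
        # Forbid direct root logins over SSH.
        sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
        service sshd restart
        # Review boot-time services and disable anything the server role doesn't need.
        chkconfig --list | grep ':on'
        # Set a sane clock before installing MySQL.
        ntpdate pool.ntp.org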

  • Creating a Logical network diagram

    - by user273284
    I'm a student, and I have been assigned the task of creating a logical network diagram for the following scenario. There are two buildings, the head office and the branch. The data centre is in the head office; it contains the domain controller, mail server, file server, and web server, and it provides wired and wireless access to the staff. The branch building is new and does not yet have a network. The two buildings must be connected using a VPN connection. The branch building will not have any servers, just the network devices that provide connectivity; the users in the branch building will connect to the head office over the VPN. I created a diagram based on this scenario, but my teacher rejected it, saying that it does not follow the Cisco hierarchical model and that the servers were not placed correctly in the diagram. I just want some help with this so that I can create my network diagram correctly. If anyone could upload a picture of how the logical diagram should look for this scenario, that would be helpful; any other resources would also be great.

  • Export files to remote server using TortoiseSVN

    - by Matt
    Hi, I'm using TortoiseSVN to keep revisions of my code. When I commit changes, I take note of which files have changed and upload them to my server using FTP. Here's my workflow:

        1. Edit files on the local computer (e.g. files in C:\Users\Me\web).
        2. Commit changes to the local repository using right-click > TortoiseSVN > SVN Commit.
        3. Open FileZilla (an FTP client) and upload the changed files to the remote server.

    I was wondering if there is a way to omit step 3 from my workflow, so that the changed files are automatically uploaded to the remote server when I commit a version to the repository. Information about my computer environment:

        - Windows 7 Ultimate x64 with TortoiseSVN x64
        - Notepad++ text editor
        - Files edited are PHP, CSS, JS, HTML, etc.
        - The server runs Linux with PHP 5.2 and MySQL; FileZilla is used to upload files.
        - I can connect to the server via SSH if needed.

    Thank you in advance.
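    One hedged way to drop step 3, assuming the repository lives on (or is mirrored to) the Linux server reachable over SSH: a server-side post-commit hook that publishes each committed revision. REPOS and REV are passed in by svn itself; /var/www/html is an assumed web root:

        #!/bin/sh
        # hooks/post-commit
        REPOS="$1"
        REV="$2"
        /usr/bin/svn export --force -r "$REV" "file://$REPOS" /var/www/html

    With a purely local repository on Windows, the equivalent would be a client-side TortoiseSVN post-commit hook script that runs an SFTP or rsync upload of the working copy.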

  • Siege - running a stress test benchmark

    - by morgoth84
    I need to run a stress-test benchmark of an HTTPS server using Siege, to see how it behaves under massive load. I'm initiating the tests from another machine which is quite powerful and is connected to the same physical switch the server is connected to. But when I initiate a test, I can't get Siege to make more than 170 requests per second. With this load the server's CPU usage is at 15-20%, the average response time for a request is approx. 0.03 seconds, and the load on the client machine is approx. 10%. So I gradually increase the number of users in Siege (the number of worker threads): the request rate increases linearly up to 170 reqs/sec, but it never gets over it. No matter how many more worker threads I start, the load on the server never exceeds 20% (and the client's load also doesn't increase any more). How can I overcome this? I've googled a bit and found that after a request is completed, the socket associated with one ephemeral port remains in the TIME_WAIT state for some time, during which it can't be reused. I tried to overcome this by doing these things:

        sysctl -w net.ipv4.ip_local_port_range="1024 65535"
        echo 1 > /proc/sys/net/ipv4/tcp_tw_recycle

    Oh, and the client machine is Linux (Red Hat, I think, but I'm not sure). Any help would be appreciated.
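    One ceiling worth ruling out, sketched with placeholders (the config path varies by Siege version; older builds use ~/.siegerc): Siege caps concurrent users via the "limit" setting in its own config, 255 by default, and each worker needs a file descriptor:

        # Raise Siege's own concurrency ceiling.
        sed -i 's/^limit *=.*/limit = 1024/' ~/.siege/siege.conf
        # More file descriptors for client sockets in this shell.
        ulimit -n 65536
        # Benchmark mode (no think-time delays) with 500 workers for 2 minutes.
        siege -b -c 500 -t 2M https://server.example.com/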

  • How to add a writable folder to the PHP document root on linux

    - by Ron Whites
    We are building an example bash script to drive our PHP TestCoverage tool on Linux. The development environment is Ubuntu 12.04.1, but we intend the Linux example to work across as many Linux versions as possible without modification. The example script requires a variable to be set to the PHP document root path, and by default it uses a small PHP example source to show the user how our GUI and text report display the covered and uncovered PHP code areas. The script is also intended to be easily alterable by the user, to automate the test-coverage display of the user's own PHP code. The problem we are having with Ubuntu 12.04 (any Linux?) is that the PHP Apache2 document root is defined in /etc/apache2/sites-available/default as /var/www, and /var/www defaults to "drwxr-xr-x" (read-only) access. So in order to add our own folder as /var/www/SDTestCoverage, we must change /var/www to "drwxrwxrwx" (read-write) access. It seems our script (at least on Ubuntu) will need to:

        1. acquire and save the current /var/www permissions, then
        2. sudo chmod 777 /var/www                  (make the document root writable)
        3. mkdir -p /var/www/SDTestCoverage         (create our folder under the document root)
        4. sudo chmod 777 /var/www/SDTestCoverage   (make our subfolder writable)
        5. finally, restore the original /var/www permissions

    Thanks, and our questions are:

        1. Is this the standard way (on Ubuntu) to add a writable folder under the PHP document root?
        2. Is this the most general-purpose way to add a writable folder under the PHP document root on other versions of Linux?
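    A hedged sketch of steps 1-5 (stat -c '%a' is GNU coreutils, so Linux-wide but not strictly POSIX-portable):

        #!/bin/sh
        DOCROOT=/var/www
        SAVED=$(stat -c '%a' "$DOCROOT")            # 1. remember the current mode
        sudo chmod 777 "$DOCROOT"                   # 2. open the document root temporarily
        mkdir -p "$DOCROOT/SDTestCoverage"          # 3. create our folder
        sudo chmod 777 "$DOCROOT/SDTestCoverage"    # 4. make the subfolder writable
        sudo chmod "$SAVED" "$DOCROOT"              # 5. restore the original mode

    A shorter alternative that never touches /var/www's own mode: sudo install -d -m 777 /var/www/SDTestCoverage.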
