Search Results

Search found 33834 results on 1354 pages for 'site column'.

  • Server-side URL scanner for malware, spyware and viruses, to protect my visitors

    - by Vangel
    I have a forum/groups site that contains a lot of external URLs, sometimes direct download links. I want to protect my visitors from possible attacks from malware sites, as they are most likely to click on these links. Currently I implement the Spamhaus DBL, but that's not enough. I want to run a background task to check the outgoing links first. I have looked at similar questions on Stack Overflow (wrongly posted there) and here, but failed to find a question the same as mine or a good answer. People have suggested ClamAV; I don't believe it can detect web-hosted malware sites, and it has a lot of missed detections. I have looked at the Google Safe Browsing service (http://code.google.com/apis/safebrowsing/developers_guide_v2.html), but it is very complicated to implement or maintain, and midway through I get lost. I can go for a commercial solution, anything to protect the visitors and my site's brand, but I would like to hear the opinion of server admins, and whether anyone has implemented such a service. My server is a basic CentOS LAMP stack. Thank you very much in advance.
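
    A minimal sketch of the background check described above, in PHP and against the current Safe Browsing Lookup API (v4), which replaced the v2 API linked in the question and is considerably simpler to call; the API key, client ID and threat-type list are assumptions:

        <?php
        // Hedged sketch (PHP 7+): POST one URL to the Safe Browsing v4 lookup endpoint.
        // An empty response object means the URL is on none of the threat lists.
        function url_is_clean(string $url, string $apiKey): bool {
            $body = json_encode([
                'client' => ['clientId' => 'myforum', 'clientVersion' => '1.0'], // assumed IDs
                'threatInfo' => [
                    'threatTypes'      => ['MALWARE', 'SOCIAL_ENGINEERING'],
                    'platformTypes'    => ['ANY_PLATFORM'],
                    'threatEntryTypes' => ['URL'],
                    'threatEntries'    => [['url' => $url]],
                ],
            ]);
            $ch = curl_init('https://safebrowsing.googleapis.com/v4/threatMatches:find?key=' . $apiKey);
            curl_setopt_array($ch, [
                CURLOPT_POST           => true,
                CURLOPT_POSTFIELDS     => $body,
                CURLOPT_HTTPHEADER     => ['Content-Type: application/json'],
                CURLOPT_RETURNTRANSFER => true,
            ]);
            $response = json_decode(curl_exec($ch), true);
            curl_close($ch);
            return empty($response['matches']); // no matches => not listed
        }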

  • Can't connect to SQL Server - file in use by another process. Conflict with IIS?

    - by shinya
    I'm having a problem connecting to MS SQL Server (2012 Express) after accessing a database through IIS (a web site). I can access the data through the web site with no problem, but I can't access the data from any other program (e.g. SSMS) until I reboot the SQL server. It seems that the connection stays open even if I close the browser. Here is the error message I'm getting: Unable to open the physical file "C:---------". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)". Unable to open the physical file "C:-------". Operating system error 32: "32(The process cannot access the file because it is being used by another process.)". Cannot open user default database. Login failed. Login failed for user 'Myserver\myname'. (.Net SqlClient Data Provider) Server Name: MYPC\SQLEXPRESS Error Number: 5120 Severity: 16 State: 101 Line Number: 65536 I followed the help link, which told me to move TCP before named pipes in the protocol order list. I tried that, but it didn't help at all. What are the proper settings on SQL Server or IIS so that the database is released after the browser is closed? How do I avoid getting this error? Thank you for your help.
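
    Error 5120 with operating system error 32 means something still holds an exclusive lock on the .mdf file itself. A common cause, assumed here since the question doesn't show the connection string, is the attach-on-demand style of connection in the site's web.config: the file stays attached under the web site's connection and locked to it. Attaching the database to the instance once and connecting by name avoids the file lock. A hedged sketch of the two styles (names are placeholders):

        <!-- attach-on-demand: the .mdf is opened and locked under the site's connection -->
        <add name="Db" connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\MyDb.mdf;Integrated Security=True" />
        <!-- connect by name, after attaching MyDb once in SSMS: no per-process file lock -->
        <add name="Db" connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=MyDb;Integrated Security=True" />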

  • What does this error mean in my IIS7 Failed Request Tracing report?

    - by Pure.Krome
    When I attempt to go to any page in my web application (I'm migrating the code from an ASP.NET web site to a web application, and am now testing it), I keep getting not-authenticated errors. So I've turned on FREB, and this is what it says... I'm not sure what that means. Secondly, I've also made sure that my site (or at least the default document, which has been set up to be default.aspx) has anonymous authentication on and the rest off. Proof: C:\Windows\System32\inetsrv>appcmd list config "My Web App/default.aspx" -section:anonymousAuthentication <system.webServer> <security> <authentication> <anonymousAuthentication enabled="true" userName="IUSR" /> </authentication> </security> </system.webServer> C:\Windows\System32\inetsrv>appcmd list config "My Web App" -section:anonymousAuthentication <system.webServer> <security> <authentication> <anonymousAuthentication enabled="true" userName="IUSR" /> </authentication> </security> </system.webServer> Can someone please help?
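
    With anonymous authentication already enabled at both levels shown above, a 401 often comes from an authorization rule or a second authentication module configured elsewhere in the hierarchy. A hedged sketch of two more queries in the same style as the appcmd calls above (full section paths):

        appcmd list config "My Web App" -section:system.webServer/security/authentication/windowsAuthentication
        appcmd list config "My Web App" -section:system.webServer/security/authorization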

  • Color drop-down in an Excel cell (with no text)? e.g. bgcolor = Red-Green-Amber-unknown

    - by adolf garlic
    I have an Excel sheet that I'm using to keep track of the status of certain things. I want a column of cells, each containing the same drop-down, that lets you select the cell's background colour: red, amber, green, or unknown. I don't want any text in the cell, I just want a coloured block. Is this possible? I've tried playing around with Data > Validation > List (based on a range containing all of said colours), but to no avail.
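
    One common workaround, sketched below as worksheet VBA and assuming the status column is C: keep the colour names in the validation list, then paint both the fill and the font with the chosen colour so only a solid block is visible.

        ' Hedged sketch: place in the worksheet's code module; column C is an assumption
        Private Sub Worksheet_Change(ByVal Target As Range)
            Dim c As Long
            If Target.Cells.Count > 1 Then Exit Sub
            If Intersect(Target, Me.Range("C:C")) Is Nothing Then Exit Sub
            Select Case LCase(Target.Value)
                Case "red": c = RGB(255, 0, 0)
                Case "amber": c = RGB(255, 191, 0)
                Case "green": c = RGB(0, 176, 80)
                Case "unknown": c = RGB(128, 128, 128)
                Case Else: Exit Sub
            End Select
            Target.Interior.Color = c
            Target.Font.Color = c   ' font matches the fill, leaving a plain coloured block
        End Sub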

  • Ways to parse NCSA combined format log files

    - by Kyle
    I've done a bit of site: searching with Google on Server Fault, Super User and Stack Overflow. I also checked non-site-specific results and didn't really see a question like this, so here goes... I did spot this question, related to grep and awk, which has some great knowledge, but I don't feel the text-qualification challenge was addressed. That question also broadens the scope to any platform and any program. I've got squid or Apache logs based on the NCSA combined format. When I say based, I mean the first n columns in the file follow the NCSA combined standard; there might be more columns after that with custom stuff. Here is an example line from a squid combined log: 1.1.1.1 - - [11/Dec/2010:03:41:46 -0500] "GET http://yourdomain.com:8080/en/some-page.html HTTP/1.1" 200 2142 "-" "Mozilla/5.0 (Windows; U; Windows NT 6.1; C) AppleWebKit/532.4 (KHTML, like Gecko)" TCP_MEM_HIT:NONE I'd like to be able to parse n logs and output specific columns, for sorting, counting, finding unique values, etc. The main challenge, what makes it a little tricky, and why I feel this question hasn't yet been asked or answered, is the text-qualification conundrum: the quoted and bracketed fields can contain the space delimiter. When I spotted asql in the grep/awk question I was very excited, but then realised that it didn't support combined out of the box; that is something I'll look at extending, I guess. Looking forward to answers and learning new stuff! Answers don't have to be limited to platform or program/language. For the context of this question, the platforms I use the most are Linux and OS X. Cheers
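
    One way to handle the text qualification is a regex that matches the quoted and bracketed fields explicitly; a minimal sketch in PHP, reading stdin, with anything after the ninth standard column captured as the custom trailing data:

        <?php
        // Hedged sketch: NCSA combined plus optional trailing custom columns.
        $combined = '/^(\S+) (\S+) (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\S+) "([^"]*)" "([^"]*)" ?(.*)$/';
        $fh = fopen('php://stdin', 'r');
        while (($line = fgets($fh)) !== false) {
            if (!preg_match($combined, $line, $m)) continue; // skip malformed lines
            // $m[1]=host, $m[4]=time, $m[5]=request, $m[6]=status, $m[9]=agent, $m[10]=custom
            echo $m[6], ' ', $m[5], "\n"; // emit whichever columns you need
        }

    It then combines with the usual pipeline tools, e.g. php parse.php < access.log | sort | uniq -c | sort -rn for counting unique values.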

  • nginx + Aegir + Drupal permissions: 403 Forbidden

    - by nlam
    I'm new to nginx, installed on Mac OS for use with Aegir and Drupal. It's running great, but I have a problem with permissions. My hostmaster installation is here: /var/aegir/hostmaster-6.x-1.7/. The hostmaster settings file is here: /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php. Permissions for settings.php are set to 440 automatically by hostmaster, but I'm getting a 403 Forbidden page because of this. If I give read permission to "other", the site works great (444 or even 004). Drupal is also telling me that the file system paths are not writable (sites/aegir.ldev/files and sites/aegir.ldev/private); I would have to change the permissions there too. Moreover, I would also have to change permissions for every site installed by hostmaster. Anyway, in my nginx.conf I have the following: user "myuser" _www; The owner and group for settings.php, /sites/example.ldev/files and /sites/example.ldev/private are "myuser" and "_www". Changing permissions to 004 solves the problem but really confuses me: why does "other" have permission and not the owner or group? I've checked the processes running in Activity Monitor: nginx is running as "myuser", except for one process running as root. So I'm stumped. Hope someone can help.
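
    One check worth making: on a PHP site it is the PHP worker (php-fpm or php-cgi), not nginx itself, that reads settings.php, and it may run as a different user than the one named in nginx.conf. A hedged sketch, with the user and group names taken from the question:

        # which users do the nginx and PHP workers actually run as?
        ps aux | egrep 'nginx|php'
        # grant that user (or its group) read access while keeping 440:
        chown myuser:_www /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php
        chmod 440 /var/aegir/hostmaster-6.x-1.7/sites/aegir.ldev/settings.php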

  • Using TrueCrypt to secure a MySQL database: any pitfalls?

    - by Saul
    The objective is to secure my database data from server theft; i.e. the server is at a business office location with a normal premises lock and burglar alarm, but because the data is personal healthcare data I want to ensure that if the server were stolen the data would be unavailable, as it is encrypted. I'm exploring installing MySQL on a mounted TrueCrypt encrypted volume. It all works fine, and when I power off, or just cruelly pull the plug, the encrypted drive disappears. This seems a load easier than encrypting data in the database, and I understand that if there is a security hole in the web app, or a user gets physical access to a plugged-in server, the data is compromised. But as a sanity check, is there any good reason not to do this? @James: I'm thinking that in a theft scenario it's not going to be powered down nicely, and so is likely to crash any DB transactions running. But then, if someone steals the server, I'm going to need to rely on my off-site backup anyway. @tomjedrz: it's kind of all sensitive; individual personal and address details linked to medical referrals/records. It would be as bad in our field as losing credit card data, but it means that almost everything in the database would need encryption... so I figured it is better to run the whole DB in an encrypted partition. If I encrypt data in the tables, there has to be a key somewhere on the server, I'm presuming, which seems more of a risk if the box walks. At the moment the app is configured to drop a dump of data (weekly full, and then deltas hourly using rdiff) into a directory also on the TrueCrypt disk. I have an off-site box running WS_FTP Pro scheduled to connect by FTPS and sync down the backup, again into a TrueCrypt-mounted partition.
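
    For reference, a minimal sketch of pointing mysqld at the mounted volume; the mount point is an assumption, and the socket can stay on the system disk if preferred:

        # /etc/my.cnf -- data lives on the TrueCrypt-mounted volume
        [mysqld]
        datadir=/mnt/tcvolume/mysql
        socket=/mnt/tcvolume/mysql/mysql.sock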

  • Can't configure DNS properly on CentOS

    - by Nuker
    I am on a VPS I must manage myself. I have network problems: in the last few days many of my users have reported that they can't reach my site via my domain, and it seems Google and Facebook can't either (this never happened before). However, I can enter my site without problems, and so can many other people. So I tested by making a PHP include like this: <?php include 'http://mysite.com/somepage.php'; ?> and I get this error: Warning: include(): php_network_getaddresses: getaddrinfo failed: Name or service not known in I even tried including content from yahoo.com or Facebook, and that didn't work either. However, the includes will work if I use IPs instead of domains. Do I have a DNS problem or something? What can I do to fix it? I'm on Linux 2.6.32-431.11.2.el6.x86_64 on x86_64, CentOS Linux 6.5. I have this in my resolv.conf: # Generated by NetworkManager # No nameservers found; try putting DNS servers into your # ifcfg files in /etc/sysconfig/network-scripts like so: # # DNS1=xxx.xxx.xxx.xxx # DNS2=xxx.xxx.xxx.xxx # DOMAIN=lab.foo.com bar.foo.com nameserver 8.8.8.8 nameserver 8.8.4.4 Thank you.
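
    One detail the comment block in that resolv.conf points to: NetworkManager regenerates the file, so hand-added nameserver lines can disappear on a network restart. A hedged sketch of persisting them in the interface config (the interface name eth0 is an assumption):

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        DNS1=8.8.8.8
        DNS2=8.8.4.4

    If lookups still fail with reachable resolvers configured, it is worth confirming with dig @8.8.8.8 mysite.com that the VPS can reach port 53 at all, since some providers filter outbound DNS.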

  • Client cannot access my IIS7 web server

    - by Soccerwiz
    I have a Windows 2008 web server running IIS 7 with about 25 websites. One of those sites is a SaaS application that is accessed constantly throughout the day. However, one particular client keeps getting blocked from my server. They will be using the service, and then all of a sudden they cannot access the application, or any other site on the server. The entire office of 4 users is blocked from accessing anything on the web server. A trace route reveals they get all the way to the server before they are blocked. However, they can access a Linux server that is a different VM with a different IP on the same physical server. Also, when they are blocked from their office, they can still access the site from their mobile phones or the local Starbucks. They can also occasionally reset their router and regain access to the web server, as they are on a dynamic IP address. I checked IIS, and I allow all IPs to access the server. There is nothing in the logs that says anything about a user being banned. I really have no idea what is causing this. Could it be a virus on their end? I have even moved the SaaS to a completely new server in a different location; they were working fine for about a month, and then the problem started occurring again. Are there any hidden blacklists in IIS? Or is it a routing issue on their end?
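
    To rule out a hidden blacklist directly, IIS7 keeps its IP restrictions in a single config section that can be listed from the command line, and the server-side firewall can be dumped the same way; a hedged sketch (run on the web server):

        %windir%\system32\inetsrv\appcmd list config -section:system.webServer/security/ipSecurity
        netsh advfirewall firewall show rule name=all dir=in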

  • What is the solution to enable Dymo Turbo 400 Label Printer to work on Win 7 / 64-bit?

    - by mdpc
    It's Christmas time, and time for printing labels for all those Christmas cards. I've upgraded to Windows 7 64-bit from XP, and I've been unsuccessfully attempting to get the connected Dymo 400 Turbo USB label printer to work again. The latest manufacturer drivers have been successfully downloaded and installed, and they are supposed to work on Windows 7 64-bit. The Win 7 systems in question are patched and up to date on that score. The Windows Update site responds with a driver when the USB cable is connected to this printer, and the printer queue seems to be established correctly. What happens is that I submit a job to the printer (whether using the DYMO software or not), it delays for a period of time, and then I get the message 'printing error'. I can't seem to locate the appropriate error in the new and improved event log. Several combinations of rebooting, re-installation and power cycling components fail to make the printer work. Sometimes during some type of reset it spits out the last thing submitted, but that seems intermittent. I have tried different USB cables and different USB (2.x) ports as well. I have run the Windows 7 troubleshooter; it tries to fix the problem, but alas it doesn't. Interestingly, trying the USB printer (and its associated manufacturer drivers and software) on another Windows 7 64-bit system produces the identical failures noticed on the original system. I did not find anything on the manufacturer's site concerning this problem, and the printer has no hardware problems or issues.

  • How to configure my 404 response

    - by Evylent
    How would I correctly redirect a person who visits a nonexistent page on my site to my 404 page? I have already created my 404.php file as: <!DOCTYPE html> <html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <title>Page not found | Twilight of Spirits</title> <link rel="stylesheet" href="http://forum.umbradora.net/template/default/css/404.css"> <link rel="icon" type="image/x-icon" href="/favicon.png"> </head> <body> <div id="error"> <a href="http://forum.umbradora.net/"> <img src="/forum/template/default/images/layout/404.png" alt="404 page not found" id="error404-image"> </a> </div> <div id="mixpanel" style="visibility: hidden; "></div></body></html> My .htaccess file is: ErrorDocument 404 http://forum.umbradora.net/404.php Now when I go to my site and enter a false link such as mack.php or total.html, I get this error: Internal Server Error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, [email protected], and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request. Any ideas on how to solve this? I have tried switching from the subdomain to my normal path, but I still get errors.
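
    A hedged sketch of the usual fix: give ErrorDocument a local path instead of a full URL. With a URL, Apache issues an external redirect rather than serving the page internally, and if that redirected request cannot itself be served you get exactly the secondary 'ErrorDocument' message quoted above (the path assumes 404.php sits in the subdomain's document root):

        ErrorDocument 404 /404.php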

  • 100% uptime for a web application

    - by Chris Lively
    We received an interesting "requirement" from a client today. They want 100% uptime with off-site failover on a web application. From our web application's viewpoint, this isn't an issue; it was designed to scale out across multiple database servers, etc. However, from a networking standpoint I just can't figure out how to make it work. In a nutshell, the application will live on servers within the client's network. It is accessed by both internal and external people. They want us to maintain an off-site copy of the system that, in the event of a serious failure at their premises, would immediately pick up and take over. Now, we know there is absolutely no way to resolve it for internal people (carrier pigeon?), but they want the external users to not even notice. Quite frankly, I haven't the foggiest idea how this might be possible. It seems that if they lose Internet connectivity, then we would have to do a DNS change to forward traffic to the external machines, which, of course, takes time. Ideas? UPDATE: I had a discussion with the client today and they clarified the issue. They stuck by the 100% number, saying the application should stay active even in the event of a flood. However, that requirement only kicks in if we host it for them. They said they would handle the uptime requirement if the application lives entirely on their servers. You can guess my response.
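
    On the DNS-change point: a low TTL narrows the failover window but never closes it, because some resolvers and browsers cache past the TTL; a hedged zone-file sketch with placeholder names and addresses:

        ; normal operation: the record points at the client's premises
        app.example.com.  60  IN  A  203.0.113.10
        ; during a failover the record is repointed at the off-site copy,
        ; and traffic drains over only as resolver caches expire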

  • Pasting to Excel from Word - stop a Word new line being converted into a new cell

    - by Sean McRaghty
    So I have a table in MS Word which has two columns. In the second column the text is spread over multiple lines, i.e. I have pressed Enter to achieve this. When I paste into Excel, it converts these separate lines into separate cells. What I want is to keep the lines in the same cell, just on different lines, i.e. what would happen if I pressed Alt+Enter in a cell in Excel. How would I go about this?
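
    A hedged sketch of the usual trick: inside the Word table, replace each paragraph mark (^p) with a manual line break (^l) before copying, so each table cell stays one Excel cell; the same replacement works by hand through Ctrl+H with those codes. As Word VBA:

        ' convert paragraph marks inside the first table to manual line breaks
        ActiveDocument.Tables(1).Range.Find.Execute _
            FindText:="^p", ReplaceWith:="^l", Replace:=wdReplaceAll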

  • nginx short URLs for MediaWiki

    - by William
    I am trying to set up short URLs for a MediaWiki site. The wiki is in a subdirectory mywiki (http://www.example.com/mywiki). I've already set up rewrites in /etc/nginx/sites-available so that example.com redirects to example.com/mywiki. Currently the URLs look like http://www.example.com/mywiki/index.php?title=Main_Page. I want to clean them up so they look like http://www.example.com/mywiki/Main_Page. I am having quite a bit of trouble doing this, as I am not familiar with regular expressions or the syntax that the nginx config files use. This is what I currently have: server_name example.com www.example.com; location / { rewrite ^.+ /mywiki/ permanent; } location /wiki/ { rewrite ^/mywiki/([^?]*)(?:\?(.*))? /mywiki/index.php?title=$1&$2 last; } The second rewrite is obviously the one that's broken. It is based on the "Page title -- nginx rewrite -- root access" example in the MediaWiki documentation. When I try to load the site, the browser tells me I get infinite redirects. Does anyone know how I should go about fixing this issue? Or rather, what is the correct way to implement this, and what do all those symbols mean?
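
    Two things stand out in the block above: the location matches /wiki/ while the site lives under /mywiki/, so the rule never fires for the wiki's URLs, and a rewrite that sends every URI back under /mywiki/ loops unless real files are excluded. A hedged sketch of the usual MediaWiki pattern, adapted to the paths in the question (it assumes a separate location already passes *.php to PHP):

        location /mywiki/ {
            # serve real files; hand everything else to the named location below
            try_files $uri $uri/ @mediawiki;
        }
        location @mediawiki {
            rewrite ^/mywiki/(.*)$ /mywiki/index.php?title=$1&$args last;
        }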

  • Networking Mac OS X 10.6 and Vista via ethernet?

    - by Moshe
    How can I make Vista Home Premium access the OS X hard drive, and the other way around? I'd like to transfer files via direct Ethernet. Plugging in an Ethernet cable makes both computers recognize a network, but not the other device. Each firewall is turned off, but no luck. Edit: I don't see Windows Sharing in the Service column.

  • Separate Certificate by Subdomain (With multiple IPs)

    - by Brian
    Note: Yes, I realize this problem is easier to solve by just using one multi-domain or wildcard certificate. I wish to have an ASP.NET site running on IIS with two SSL domains sharing one web application but using separate certificates. Assuming I have two certificates, this can be solved on IIS7 as follows: Web Application1: Binding 1: http, 80, IP Address *, Host Name *; Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1); Binding 3: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2). That is to say, 2 certificates and 2 IP addresses, but both mapped to the same web application. In IIS6, the closest I have been able to come to this configuration is: Web Application1: Binding 1: http, 80, IPADDRESS1; Binding 2: https, 443, IPADDRESS1, using CERTDOMAIN1 (DOMAIN1 resolves to IPADDRESS1). Web Application2: Binding 1: http, 80, IPADDRESS2; Binding 2: https, 443, IPADDRESS2, using CERTDOMAIN2 (DOMAIN2 resolves to IPADDRESS2). That is to say, 2 certificates and 2 IP addresses, two web applications, both mapped to the same file location. The IIS6 solution is not optimal: even when sharing an application pool, there are still costs associated with running the same site as two applications. Is upgrading from IIS6 to IIS7 a legitimate way to resolve this problem? Is there an IIS6 way to map 2 IP addresses within the same web application to different certificates?
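
    On the upgrade question: yes, IIS7 bindings carry the IP address, so one site can hold both https bindings, with each certificate bound per IP:port in the HTTP.SYS SSL store. A hedged sketch with appcmd, using the question's placeholders:

        appcmd set site "Web Application1" /+bindings.[protocol='https',bindingInformation='IPADDRESS1:443:']
        appcmd set site "Web Application1" /+bindings.[protocol='https',bindingInformation='IPADDRESS2:443:']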

  • Safe to remove Python2.6 files?

    - by darkfeline
    I'm using Linux Mint 11 (will upgrade soon), and I've noticed that, even though I don't have any python2.6 packages installed with apt, there are a bunch of residual python2.6 files scattered around my drive, including, but not limited to, dist-packages in /usr/lib/python2.6 and various /usr/share stuff. Is there any way to test whether these files are still being used? I'm tempted to sudo rm -rf the lot of them, but I'm scared it'll break stuff. Also, does anyone have any idea where these files could have come from? I believe I had python2.6 installed once upon a time, but I made sure to --purge it, so there shouldn't be any trace left, right? EDIT: after using a quick script to check all of the files, it appears most of them belong to important packages, so I won't try weeding out the few which I know are probably useless. Although I am curious why so many packages have python2.6 files when I don't even have it installed. These files are not associated with any package, and I'm not sure if they are safe to remove: /usr/bin/ipython2.6 /usr/lib/python2.6/dist-packages/distribute-0.6.15.egg-info /usr/lib/python2.6/dist-packages/easy_install.py /usr/lib/python2.6/dist-packages/IPython /usr/lib/python2.6/dist-packages/ipython-0.10.1.egg-info /usr/lib/python2.6/dist-packages/setuptools /usr/lib/python2.6/dist-packages/setuptools.egg-info /usr/lib/python2.6/dist-packages/setuptools.pth /usr/lib/python2.6/dist-packages/site.py /usr/lib/python2.6/dist-packages/wx.pth /usr/local/lib/python2.6 /usr/local/lib/python2.6/dist-packages /usr/local/lib/python2.6/site-packages /usr/share/man/man1/ipython2.6.1.gz
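
    A hedged sketch of the quick check mentioned in the edit, for a Debian-based package database such as Mint's: ask dpkg which package claims each file, and treat unclaimed ones as having come from outside apt. Files like distribute-0.6.15.egg-info and easy_install.py are typically left behind by a setuptools or easy_install run rather than by a package, which would explain the leftovers above:

        # list files no installed package claims (candidates for manual removal)
        find /usr/lib/python2.6 /usr/local/lib/python2.6 -type f 2>/dev/null | while read f; do
            dpkg -S "$f" >/dev/null 2>&1 || echo "orphan: $f"
        done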

  • How can I optimize my AJAX calls to deliver at 60 ms?

    - by Quintin Par
    I am building autocomplete functionality for my site, and Google's instant results are my benchmark. When I look at Google, the 50-60 ms response times baffle me. They look insane. In comparison, here's how mine looks. To give you an idea, my results are cached on the load balancer and served from a machine that has httpd slow start and initcwnd tuned. My site is also behind CloudFlare. From a server-side perspective I don't think I can do anything more. Can someone help me take this 500 ms response time down to 60 ms? What more should I be doing to achieve Google-level performance? Edit: People, you seemed to be angry that I compared myself to Google and that the question is very generic. Sorry about that. To rephrase: how can I bring the response time down from 500 ms to 60 ms, given that my server's response time is just a fraction of a millisecond? Assume the results are served from nginx and Varnish with a cache hit. Here are some answers I would give myself, assuming the response sizes remain more or less the same: ensure the results are HTTP compressed; ensure SPDY if you are on HTTPS; ensure you have initcwnd set to 10 and disable slow start on Linux machines; etc. I don't think I'll end up with 60 ms at Google's level, but your collective expertise can easily help shave off 100 ms, and that's a big win.
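
    A hedged sketch of the first two levers in nginx terms; the directive names are standard, and spdy requires an nginx build compiled with SPDY support:

        # http block: compress the autocomplete responses (assumed JSON)
        gzip on;
        gzip_types application/json;
        gzip_min_length 256;
        # server block, on a SPDY-capable build:
        listen 443 ssl spdy;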

  • Migrating ODBC information through a batch file

    - by DeskSide
    I am a desktop support technician currently working on a large-scale migration project across multiple sites. I am looking at a way to transfer ODBC entries from Windows XP to Windows 7. If anyone knows of a program or anything prebuilt that already does this, please redirect me; I've already looked but haven't found anything, so I'm trying to build my own. I know enough basic programming to read the work of others and monkey around with something that already exists, but not much else. I have come across a custom batch file written at one site that (among other things) exports ODBC information from the old computer and stores it on a server (labelled as y: through net use at the beginning of the file), then later transfers it from the server to the new computer. The pre-existing code is for Windows XP to XP migrations. Here are the pertinent bits of code: echo Exporting ODBC Information start /wait regedit.exe /e "y:\%username%\odbc.reg" HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI (and later on) echo Importing ODBC start /wait regedit /s "y:\%username%\odbc.reg" We are now migrating from Windows XP to 7, and this part of the batch file still seems to work for this particular site, where Oracle 8i and 10g are used. I'm looking to use my cut-down version of this code at multiple sites, and I'm wondering if the same lines of code will still work for anything other than Oracle. Also, my research on this issue has shown that 64-bit operating systems keep 32-bit and 64-bit entries in different registry locations, and I'm wondering what effect that has on the code. Could I copy the same data to both parts of the registry, in hopes of catching everything? Any assistance would be appreciated. Thank you for your time.
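
    On the 64-bit question: 64-bit Windows keeps 32-bit ODBC DSNs under the Wow6432Node branch of the registry, so exporting both keys covers both ODBC views; a hedged sketch in the same style as the existing batch file:

        echo Exporting ODBC Information
        start /wait regedit.exe /e "y:\%username%\odbc64.reg" HKEY_LOCAL_MACHINE\SOFTWARE\ODBC\ODBC.INI
        start /wait regedit.exe /e "y:\%username%\odbc32.reg" HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\ODBC\ODBC.INI

    On import, each .reg file restores to the branch it was exported from, so the 64-bit and 32-bit ODBC views each get their own DSNs back.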

  • SRM 4 test fails for some VMs with: Error: A specified parameter was not correct.

    - by Setesh
    Here is my architecture. For the protected site: 4 vSphere Enterprise Plus hosts, each one with 2 FC HBAs connected to the switch fabric, connected to an EMC CX4-120; 1 vCenter; 1 SRM. For the recovery site: 2 vSphere 4 hosts; 1 vCenter; 1 SRM; 1 CX4-120. The first CX4-120 is connected to the second CX4-120 with iSCSI and MirrorView/Asynchronous. For the time being I synchronise 6 LUNs on an FC DAE and 2 on a SATA DAE. I have allocated 30% of the synchronised LUN capacity for snapshot use, but allocated it only on my SATA II DAE. That does not cause a problem; my snapshots are correctly active. The whole installation is new (hardware and software), installed in January with the latest files available for download. I have a strange problem, and it's random: sometimes when I run a test on my recovery plan, some VMs get this error: Error: A specified parameter was not correct. I don't know where to look. Any help is appreciated... PS: I have checked all the VMs; no floppy disk or CD attached. PS2: There are several VMs with RDM and OCFS2 filesystems on them.

  • Apache RewriteEngine, redirect sub-directory to another script

    - by Niklas R
    I've been trying to achieve this for about 1.5 hours now. I want the following transformations when requesting sites on my website: homepage.com/ => index.php homepage.com/archive => index.php?archive homepage.com/archive/site-01 => index.php?archive/site-01 homepage.com/files/css/main.css => requestfile.php?css/main.css The first three transformations can be done by using the following: RewriteEngine on RewriteRule ^/?$ index.php RewriteRule ^/?(.*)$ index.php?$1 However, I'm stuck at the point where all requests to the files subdirectory should be redirected to requestfile.php. This is one of the attempts I've made: RewriteEngine on RewriteRule ^/?$ index.php RewriteRule ^/?files/(.+)$ requestfile.php?$1 RewriteRule ^/?(.*)$ index.php?$1 But that does not work. I've also tried putting [L] after the third line, but that didn't help, as I'm using this configuration in .htaccess and sub-requests get transformed again, etc. I fiddled with the RewriteCond directive but couldn't get it to work. What does the configuration need to look like to achieve what I want?
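
    A hedged sketch of one way out of the loop: mark each rule with [L] and guard the catch-all so it skips requests that already resolve to an existing file, which covers index.php and requestfile.php when .htaccess re-runs on the rewritten URL:

        RewriteEngine on
        RewriteRule ^/?$ index.php [L]
        RewriteRule ^/?files/(.+)$ requestfile.php?$1 [L]
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteRule ^/?(.*)$ index.php?$1 [L]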

  • IIS 6 Denies access to the default document

    - by Jim
    I've got Windows Server 2k3 with IIS6, hosting a couple of ASP.NET MVC 2 applications (.NET 4), all in the Default Web Site. Most of them simply use Integrated authentication, but a couple use Forms as well. All the applications work properly and are correctly accessible. The problem I'm trying to resolve is access to the default document. It is currently specified as index.htm. Both index.htm and the Default Web Site are configured to allow anonymous access (with none of the authenticated access boxes checked). However, access to the file is denied. Accessing via server.domain.tld/ and server.domain.tld/index.htm both yield 401 errors. However, server.domain.tld/default.htm (a file that does not exist) properly returns a 404. If I alter the file security on index.htm to allow Integrated authentication, then requesting /index.htm directly works properly for users with domain accounts, but anonymous users get a login prompt/401. How can I configure IIS to allow all users to view index.htm via server.domain.tld/?
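
    When the anonymous checkbox is already set, a 401 on a static file usually points at NTFS permissions: the configured anonymous account (IUSR_<machinename> by default on IIS6) needs read access to the file itself. A hedged sketch, with path and account name as assumptions:

        cacls "C:\Inetpub\wwwroot\index.htm"
        cacls "C:\Inetpub\wwwroot\index.htm" /E /G IUSR_SERVERNAME:R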

  • How to set up bindings for a development IIS 7.5 server with lots of sites

    - by Antonio Bakula
    I am a programmer in a small ASP.NET shop with very little experience in server administration, and I have to set up IIS 7.5 to host lots of sites on a newly installed Windows Server 2008 R2. These sites are test "clones" of sites on the "real" web server, and they should be accessible only on the local network (domain). Developers should be able to add new sites for our new customers. Project managers use this server to check progress and test new sites and new features, and QA people have to have access to these sites to test before we copy them to the "real" web server. Developers only have access to the IIS console; in fact, they can RDP to the test server with their developer domain credentials and permissions, and they are also local admins on that machine (tester). On our previous server I used different port numbers for each site. That worked, but I don't like this solution; I would prefer to use subdomains. But here are the problems: manually adding DNS records is not an option, because we don't want developers to have to administer the domain DNS server, and currently this has to be done with domain administrator credentials. Is there some way to add DNS records automatically? I tried to add a wildcard DNS record for subdomains of the test server (*.tester), and that seemed to work for a while, but the change caused some bad problems in our domain network and the admin forbade me to mess with DNS. He said I have to add a DNS record for every subdomain manually, that I cannot use wildcards, and that there is nothing I can do about it, mainly for "political" reasons :( Obviously our admin is pretty uncooperative, outsourced from a different organization, and I can't do anything about that. Can I add another DNS server on that machine? What must be set up on client machines to "tell" them to use both the domain DNS server and the tester domain server? So please, I need someone to give me some advice. What should I do? Are different port numbers the only option left? Thanks!
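
    If the records must be added one at a time anyway, Windows DNS can at least be scripted: dnscmd runs under whatever credentials are delegated for the zone, so a small wrapper in the site-creation process could add each record without giving developers the DNS console. A hedged one-line sketch (server, zone and address are assumptions):

        dnscmd dc01 /RecordAdd corp.local newsite.tester A 10.0.0.50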
