Search Results

Search found 113039 results on 4522 pages for 'database sql server'.


  • Cannot connect to my MySQL database.

    - by madhup
    Hi all, I am trying to connect to a MySQL database on Amazon through a PHP script, but I am shown this error: Warning: mysql_connect() [function.mysql-connect]: Lost connection to MySQL server at 'reading initial communication packet', system error: 111. I have tried and searched around and did the following things: in "/etc/mysql/my.cnf" I commented out the line bind-address = 127.0.0.1 to allow access from all hosts, and I checked /etc/hosts.allow and /etc/hosts.deny and made sure that there are no rules present that may cause the problem. But still no luck. Please suggest any other approach. Thanks, Madhup
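    System error 111 is the Linux errno for "connection refused", so this is usually a networking or account-host issue rather than a query problem. A few diagnostic statements that are sometimes run on the MySQL server itself in this situation (a sketch only, not from the original question; the user, password and database names below are hypothetical):

    -- Confirm the bind-address change actually took effect after restarting mysqld
    -- (bind_address is exposed as a system variable on newer MySQL versions)
    SHOW VARIABLES LIKE 'bind_address';

    -- Check which hosts each account is allowed to connect from
    SELECT user, host FROM mysql.user;

    -- If the account is restricted to localhost, a remote-capable account could be created like this
    CREATE USER 'appuser'@'%' IDENTIFIED BY 'secret';
    GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'%';
    FLUSH PRIVILEGES;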


  • Why isn't the backup file created when running sqlcmd from a remote machine?

    - by Ed Gl
    I tried running sqlcmd from a remote host to do a simple backup of a SQL Server 2008 database. The command goes something like this: sqlcmd -S xxx.xxx.xxx.xx -U username -P some_password -Q "BACKUP DATABASE [db] TO DISK = 'c:\test_backup.bak' WITH FORMAT" I get a success message but the file isn't created. When I run the same statement in SQL Server Management Studio on that machine, it works. I thought it was a permissions problem, but I'm using the same username in both cases. Any thoughts?
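    Since BACKUP DATABASE ... TO DISK resolves the path on the machine running SQL Server, not the machine running sqlcmd, one way to see where a reported-successful backup actually landed is to ask msdb. A hedged sketch, reusing the [db] placeholder name from the question:

    -- Show the most recent backups of the database and the physical file they were written to;
    -- physical_device_name is a path on the SQL Server machine's file system
    SELECT TOP (5)
           bs.database_name,
           bs.backup_finish_date,
           bmf.physical_device_name
    FROM msdb.dbo.backupset AS bs
    INNER JOIN msdb.dbo.backupmediafamily AS bmf
           ON bs.media_set_id = bmf.media_set_id
    WHERE bs.database_name = 'db'
    ORDER BY bs.backup_finish_date DESC;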


  • Guide to reduce TFS database growth using the Test Attachment Cleaner

    - by terje
    Recently there have been several reports of TFS databases growing too fast and too big. Notably, this has been observed when one has started to use more features of the testing system. Also, TFS 2010 handles test results differently from TFS 2008, and this leads to more data stored in the TFS databases. As a consequence, some tools have been released to remove unneeded data from the database, along with some fixes for bugs which have been found and corrected during this process. Further, some preventive practices and maintenance rules should be adopted. A lot of people have blogged about this, among them:
    - Anu's very important blog post here describes both the problem and solutions to handle it. She describes both the Test Attachment Cleaner tool and some QFE/CU releases that fix underlying bugs which prevented the tool from being fully effective.
    - Brian Harry's blog post here describes the problem too.
    - This forum thread describes the problem with some solution hints.
    - Ravi Shanker's blog post here describes best practices on solving this (TBP).
    - Grant Holliday's blog post here describes strategies for using the Test Attachment Cleaner, both to detect space problems and to rectify them.
    The problem can be divided into the following areas:
    - Publishing of test results from builds
    - Publishing of manual test results, and their attachments in particular
    - Publishing of deployment binaries for use during a test run
    - Bugs in SQL Server preventing total cleanup of data
    (All the published data above is published into the TFS database as attachments.) The test results will include all data being collected during the run. Some of this data can grow rather large, like IntelliTrace logs and video recordings. Also, the pushing of binaries which happens for automated test runs - including tests run during a build using code coverage, which will include all the files in the deployment folder - contributes a lot to the size of the attached data.
    In order to handle this systematically, I have set up a 3-stage process:
    1. Find out if you have a database space issue
    2. Set up your TFS server to minimize potential database issues
    3. If you have the "problem", clean up the database and otherwise keep it clean
    Analyze the data
    Are your databases growing? Are unused test results growing out of proportion? To find out, you need to query your TFS database for some of the information, and use the Test Attachment Cleaner (TAC) to obtain some more detailed information. If you don't have too many databases you can use the SQL Server reports from within Management Studio to analyze the database and table sizes. Or you can use a set of queries. I often find queries faster to use because I can tweak them the way I want. But be aware that these queries are non-documented and non-supported and may change when the product team wants to change them. If you have multiple project collections, find out which might have problems. (Disclaimer: the queries below work on TFS 2010. They will not work on Dev-11, since the table structure has been changed. I will try to update them for Dev-11 when it is released.) Open a SQL Server Management Studio session onto the SQL Server where you have your TFS databases, and use the query below to find the project collection databases and their sizes, in descending size order.
use master
select DB_NAME(database_id) AS DBName, (size/128) SizeInMB
FROM sys.master_files
where type=0 and substring(db_name(database_id),1,4)='Tfs_' and DB_NAME(database_id)<>'Tfs_Configuration'
order by size desc

Doing this on one of our SQL servers gives the following results; it is pretty easy to see on which collection to start the work.

Find out which tables are possibly too large
Keep a special watch out for the Tfs_Attachment table. Use the script at the bottom of Grant's blog to find the table sizes in descending size order. In our case we got this result: from Grant's blog we learnt that tbl_Content is in the Version Control category, so the only big issue we have here is tbl_AttachmentContent.

Find out which team projects have possibly too large attachments
In order to use the TAC to find and eventually delete attachment data, we need to find out which team projects have these attachments. The team project is a required parameter to the TAC. Use the following query to find this; replace the collection database name with whatever applies in your case:

use Tfs_DefaultCollection
select p.projectname, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by p.projectname
order by sum(a.compressedlength) desc

In our case we got this result (I had to remove some names), out of more than 100 team projects accumulated over quite some years. As can be seen, it is pretty obvious that "Byggtjeneste – Projects" is the main team project to take care of, with the ones on lines 2-4 as the next ones.

Check which attachment types take up the most space
It can be nice to know which attachment types take up the space, so run the following query:

use Tfs_DefaultCollection
select a.attachmenttype, sum(a.compressedlength)/1024/1024 as sizeInMB
from dbo.tbl_Attachment as a
inner join tbl_testrun as tr on a.testrunid=tr.testrunid
inner join tbl_project as p on p.projectid=tr.projectid
group by a.attachmenttype
order by sum(a.compressedlength) desc

We then got this result: from it, it is pretty obvious that the problem here is the binary files, as also mentioned in Anu's blog.

Check which file types, by their extension, take up the most space
Run the following query:

use Tfs_DefaultCollection
select SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999) as Extension, sum(compressedlength)/1024 as SizeInKB
from tbl_Attachment
group by SUBSTRING(filename,len(filename)-CHARINDEX('.',REVERSE(filename))+2,999)
order by sum(compressedlength) desc

This gives a result like this. Now you should have collected enough information to tell you what to do - if you need to do something at all - and some of the information you need in order to set up your TAC settings file, both for a cleanup and for scheduled maintenance later.

Get your TFS server and environment properly set up
Whether or not you already have the problem, you should ensure the TFS server is set up so that the risk of getting into this problem is minimized. To ensure this you should install the following set of updates and components. The assumption is that your TFS server is at SP1 level.
Install the QFE for KB2608743 - which also contains detailed instructions on its use, download from here. The QFE changes the default settings so that deployed binaries, which are used in automated test runs, are not uploaded.
Binaries will still be uploaded if:
- Code coverage is enabled in the test settings.
- You change UploadDeploymentItem to true in the testsettings file. Be aware that this might be reset back to false by another user who hasn't installed this QFE.
The hotfix should be installed on:
- The build servers (the build agents)
- The machine hosting the Test Controller
- Local development computers (Visual Studio)
- Local test computers (MTM)
It is not required to install it on the TFS server, the test agents or the build controller - it has no effect on these programs.
If you use SQL Server 2008 R2 you should also install CU 10 (or later). This CU fixes a potential problem of hanging "ghost" files. This seems to happen only in certain trigger situations, but to ensure it doesn't bite you, it is better to make sure this CU is installed. There is no such CU for SQL Server 2008 pre-R2.
Workaround: if you suspect hanging ghost files, they can be - with some mental effort - deduced from the ghost counters using the following SQL query:

use master
SELECT DB_NAME(database_id) as 'database', OBJECT_NAME(object_id) as 'objectname',
       index_type_desc, ghost_record_count, version_ghost_record_count, record_count, avg_record_size_in_bytes
FROM sys.dm_db_index_physical_stats (DB_ID(N'<DatabaseName>'), OBJECT_ID(N'<TableName>'), NULL, NULL, 'DETAILED')

The problem is a stalled ghost cleanup process. To fix it, stop all components that depend on the SQL Server, like the TFS Server and SPS services - that is, all applications that connect to the SQL Server - then restart the SQL Server, and finally start up all dependent processes again. (I would guess a complete server reboot would do the trick too.) After this the ghost cleanup process will run properly again. The fix will come in the next CU cycle for SQL Server R2 SP1. The R2 pre-SP1 and R2 SP1 releases have separate maintenance cycles and are maintained individually; each has its own set of CUs. When it comes I will add the link here to that CU. The "hanging ghost file" issue came up after one had run the TAC and deleted enormous amounts of data. The SQL Server can get into this hanging state (without the QFE) in certain cases due to this.
And of course, install and set up the Test Attachment Cleaner command line power tool. This should be done following some guidelines from Ravi Shanker:
"When you run TAC, ensure that you are deleting small chunks of data at regular intervals (say run TAC every night at 3AM to delete data that is between age 730 to 731 days) - this will ensure that small amounts of data are being deleted and SQL ghosted record cleanup can catch up with the number of deletes performed."
This rule minimizes the risk of the ghosted hang problem occurring, and further makes it easier for the SQL Server ghosting process to work smoothly.
"Run DBCC SHRINKDB post the ghosted records are cleaned up to physically reclaim the space on the file system"
This is the last step in a 3-step process of removing SQL Server data: first the records are logically deleted, then they are cleaned out by the ghosting process, and finally the space is reclaimed with the shrink database command.
Cleaning out the attachments
The TAC is run from the command line using a set of parameters and controlled by a settings file. The parameters point out a server URI including the team project collection and also point at a specific team project. So in order to run this for multiple team projects regularly, one has to set up a script to run the TAC multiple times, once for each team project.
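As a small aid for such a wrapper script, the team project names can be listed straight from the collection database. This is a sketch only, using the same non-documented tbl_project table as the queries above; column names may differ between TFS versions:

use Tfs_DefaultCollection
-- List the team projects in this collection, so a script can invoke the TAC once per project
select p.projectname
from tbl_project as p
order by p.projectname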
When you install the TAC there is a very useful readme file in the same directory. When the deployment binaries are published to the TFS server, ALL items are published up from the deployment folder. That often means many more files than you would assume are necessary. This is a brute force technique: it works, but you need to take care when cleaning up. Grant has shown how their settings file looks in his blog post, removing all attachments older than 180 days, as long as there are no active work items connected to them. This setting can be useful to clean out old items, both in a one-off clean-up operation and in a general maintenance schedule.
There are two scenarios we need to consider:
- Cleaning up an existing overgrown database
- Maintaining a server to avoid an overgrown database, using a scheduled TAC
1. Cleaning up a database which has grown too big due to these attachments
This job is a "once" job. We do this once and then move on to make sure it won't happen again, by taking the actions in 2) below. In this scenario you should only consider the large files. Your goal should be to simply reduce the size, and not bother about the smaller stuff; that can be left to a scheduled TAC cleanup (2 below). Here you can use a very general settings file and just remove the large attachments, or you can choose to remove any old items. Grant's settings file is an example of the latter. A settings file to remove only large attachments could look like this:

<!-- Scenario : Remove large files -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
  </Attachment>
</DeletionCriteria>

If you want to remove only dll's and pdb's above that size, add an Extensions section; without that section, attachments with any extension will be deleted:

<!-- Scenario : Remove large files of type dll's and pdb's -->
<DeletionCriteria>
  <TestRun />
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="dll" />
      <Include value="pdb" />
    </Extensions>
  </Attachment>
</DeletionCriteria>

Before you start up your scheduled maintenance, you should clear out all older items.
2. Scheduled maintenance using the TAC
Run a schedule every night that removes old items, and remove them in small batches. It is important to run this often, like every night, in order to keep the number of deleted items low; that way the SQL ghost process works better. One approach could be to delete all items older than some number of days, let's say 180 days. This could be combined with restricting it to keep attachments linked to active or resolved bugs. Doing this every night ensures that only small amounts of data are deleted.

<!-- Scenario : Remove old items except if they have active or resolved bugs -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="180" />
  </TestRun>
  <Attachment />
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

In my experience there are projects which are left with active or resolved work items, although no further work is done. It can be wise to have a cleanup process with no restrictions on linked bugs at all; note that you then have to remove the whole LinkedBugs section. An approach which could work better here is a two-step approach: use the schedule above, with no LinkedBugs section, as a sweeper task taking away all data older than you care about, and then have another scheduled TAC task to take out more specific attachments that you are not likely to use.
This task could be much more specific and, based on your analysis, clean out what you know is troublesome data.

<!-- Scenario : Remove specific files early -->
<DeletionCriteria>
  <TestRun>
    <AgeInDays OlderThan="30" />
  </TestRun>
  <Attachment>
    <SizeInMB GreaterThan="10" />
    <Extensions>
      <Include value="iTrace" />
      <Include value="dll" />
      <Include value="pdb" />
      <Include value="wmv" />
    </Extensions>
  </Attachment>
  <LinkedBugs>
    <Exclude state="Active" />
    <Exclude state="Resolved" />
  </LinkedBugs>
</DeletionCriteria>

The readme document for the TAC says that it recognizes "internal" extensions, but it does in fact accept any extension. To run the tool, use the following command:

tcmpt attachmentcleanup /collection:your_tfs_collection_url /teamproject:your_team_project /settingsfile:path_to_settingsfile /outputfile:%temp%/teamproject.tcmpt.log /mode:delete

Shrinking the database
You could run a shrink database command after the TAC has run in cases where a lot of data has been deleted. In this case you SHOULD do it, to free up all that space. But after the shrink operation you should rebuild the indexes, since the shrink operation will leave the database in a very fragmented state, which will reduce performance. Note that you need to rebuild the indexes; reorganizing is not enough. For smaller amounts of data you should NOT shrink the database, since the space will be reused by the SQL Server when it needs to add more records. In fact, it is regarded as bad practice to shrink the database regularly, so on a daily maintenance schedule you should NOT shrink the database. To shrink the database you run a DBCC SHRINKDATABASE command and then follow up with an index rebuild afterwards. I find the easiest way to do this is to create a SQL maintenance plan including the Shrink Database Task and the Rebuild Index Task, and just execute it when you need to do this.
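A minimal sketch of that once-in-a-while sequence, assuming the collection database name used in the earlier queries (run it only after a large TAC cleanup, not as part of daily maintenance):

use Tfs_DefaultCollection
-- Reclaim the file-system space freed once the ghost cleanup has caught up with the deletes
DBCC SHRINKDATABASE (N'Tfs_DefaultCollection')
-- The shrink leaves indexes heavily fragmented, so rebuild them afterwards;
-- shown for the attachment table only as an illustration, a maintenance plan would cover all tables
ALTER INDEX ALL ON dbo.tbl_Attachment REBUILD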


  • Partner Webcast – More out of ODA with DB Options - 19 July 2012

    - by Thanos
    The Simple, Reliable, Affordable Path to High-Availability Databases. Critical business data needs to be available 24/7 for users and customers, but it can be a struggle to find the time and resources to build a highly available database system that's reliable and affordable. That's why Oracle created the new Oracle Database Appliance - a complete package of software, server, storage, and networking. The Oracle Database Appliance integrates the world's most popular database - Oracle Database 11g - with system software, servers, storage and networking in a single box. Business gets the benefit of a reliable, secure and highly available database to support applications and maintain continuity - as well as groundbreaking ease of use. But that is not all: with the support for all Oracle Database Options, Oracle Database Appliance can be the ideal solution for many use cases. The benefits?
    - Unmatched performance, reliability & security for your data that's there when you need it - which is all the time.
    - Fast installation, simple deployment, easy management. Out of the box.
    - Significant cost savings & reduced risk and complexity compared to integrating all the elements yourself.
    - Ongoing lower total cost of ownership with multiple automated support, detection & correction functions that also save you time.
    Discover the Oracle Database Appliance value proposition and learn how to position and combine it with database options to capture new business and easily roll out solutions safely and with maximum cost efficiency. Agenda:
    - Oracle Database & Engineered Systems Innovation
    - What's the Oracle Database Appliance?
    - Oracle Database Appliance Value Proposition
    - Oracle Database Appliance with Database Options
    - Oracle Database Appliance Partners Business
    Delivery Format: This FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Duration: 1 hour. Register Now! For any questions please contact us at partner.imc-AT-beehiveonline.oracle-DOT-com. Visit our ISV Migration Center blog regularly, or follow us @oracleimc to learn more about Oracle Technologies as well as upcoming partner webcasts and events.


  • SQL University: What and why of database refactoring

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It's a collection of blog posts on different database related topics contributed by several smart people all over the world. So this week is mine and we'll be talking about database testing and refactoring. In 3 posts we'll cover:
    SQLU part 1 - What and why of database testing
    SQLU part 2 - What and why of database refactoring
    SQLU part 3 - Tools of the trade
    This is the second part of the series, and in it we'll take a look at what database refactoring is and why to do it.
    Why refactor a database
    To know why to refactor we first have to know what refactoring actually is. Code refactoring is a process where we change module internals in a way that does not change that module's input/output behavior. For successful refactoring there is one crucial thing we absolutely must have: tests. Automated unit tests are the only guarantee we have that we haven't broken the input/output behavior while refactoring. If you haven't got them, go back and read my post on the matter. Then start writing them. The next thing you need is a code module. Those are views, UDFs and stored procedures. With direct table access we can kiss fast and sweet refactoring goodbye. One more point for having a database abstraction layer. And no, ORMs don't fall into that category. But also know that refactoring is NOT adding new functionality to your code. Many have fallen into this trap. Don't be one of them; resist the lure of the dark side. And it's a strong lure. We developers in general love to add new stuff to our code, but hate fixing our own mistakes or changing existing code for no apparent reason. To be a good refactorer one needs discipline and focus. Now we know that refactoring is all about changing the inner workings of existing code. This can be due to performance optimizations, changing internal code workflows or some other reason. This is a typical black box scenario to the outside world. If we upgrade the car engine it still has to drive on the road (preferably faster) and not fly (no matter how cool that would be). Also be aware that white box tests will break when we refactor.
    What to refactor in a database
    Refactoring databases doesn't happen that often, but when it does it can include a lot of stuff. Let us look at a few common cases.
    Adding or removing database schema objects
    Adding, removing or changing table columns in any way, adding constraints, keys, etc. All of these can be counted as internal changes not visible to the data consumer, but each of them carries a potential input/output behavior change. Dropping a column can result in views not working anymore or stored procedure logic crashing. Adding a unique constraint exposes duplicated data that shouldn't exist. A foreign key breaks a truncate table command executed from an application that runs once a month. All these scenarios are very real and can happen. With a proper database abstraction layer fully covered with black box tests we can make sure something like that does not happen (hopefully at all).
    Changing physical structures
    Physical structures include heaps, indexes and partitions. We can pretty much add or remove those without changing the data returned by the database, but the performance can be affected. So here we use our performance tests. We do have them, right? Just by adding a single index we can achieve orders of magnitude performance improvement. Won't that make users happy?
But what if that index causes our write operations to crawl to a stop? Again, we have to test this. There are a lot of things to think about and have tests for. Without tests we can't do successful refactoring!
Fixing bad code
We all have some bad code in our systems. We usually refer to such code as code smells, as it violates good coding practices. Examples of such code smells are SQL injection, use of SELECT *, scalar UDFs or cursors, etc. Each of those is a huge code smell and can result in major code changes. Take SELECT * for example. If we remove a column from a table, the client using that SELECT * statement won't have a clue about it until it runs. Then it will gracefully crash and burn. Not to mention the widely unknown SELECT * view refresh problem that Thomas LaRock (@SQLRockstar on Twitter) and Colin Stasiuk (@BenchmarkIT on Twitter) talk about in detail. Go read about it, it's informative. Refactoring this includes replacing the * with column names and most likely changes to the application using the database (a small sketch of this follows below).
Breaking apart huge stored procedures
Have you ever seen a stored procedure that was 2000 lines long? I have. It's not pretty. It hurts the eyes and sucks the will to live for the next 10 minutes. They are a maintenance nightmare and turn into things no one dares to touch. I'm willing to bet that 100% of the time they don't have a single test on them. Large stored procedures (and functions) are a clear sign that they contain business logic. General opinion on good database coding practices says that business logic has no business in the database; that's the application's part. Refactoring such behemoths requires writing lots of edge case tests for the stored procedure's input/output behavior before starting to refactor it. First we split the logic inside into smaller parts like new stored procedures and UDFs. Those then get called from the master stored procedure. Once we've successfully modularized the database code, it's best to transfer that logic into the applications consuming it. This leaves the stored procedure with only common data manipulation logic. Of course this isn't always possible, so having a plethora of performance and behavior unit tests is absolutely necessary to confirm we've actually improved the codebase in some way.
Refactoring is not a popular chore amongst developers or managers. The former don't like fixing old code, the latter can't see the financial benefit. Remember how we talked about being lousy at estimating future costs in the previous post? But there comes a time when it must be done. Hopefully I've given you some ideas how to get started. In the last post of the series we'll take a look at the tools to use and an example of testing and refactoring.
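A hedged illustration of the SELECT * refactoring mentioned above (all object and column names here are made up for the example; they are not from the post):

-- Hypothetical starting point: a view defined with SELECT *
CREATE VIEW dbo.vwCustomer
AS
SELECT * FROM dbo.Customer;

-- After dbo.Customer gains or loses a column, the view's metadata is stale until it is refreshed
EXEC sys.sp_refreshview @viewname = N'dbo.vwCustomer';

-- The actual refactoring: replace the * with an explicit column list so the contract is pinned down
ALTER VIEW dbo.vwCustomer
AS
SELECT CustomerId, FirstName, LastName
FROM dbo.Customer;

With black box tests around the view, both the refresh and the rewrite can be verified to leave the input/output behavior unchanged.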


  • Database unit testing is now available for SSDT

    - by jamiet
    Good news was announced yesterday for those that are using SSDT and want to write unit tests: unit testing functionality is now available. The announcement was made on the SSDT team blog in the post Available Today: SSDT—December 2012. Here are a few thoughts about this news. Firstly, there seems to be a general impression that database unit testing was not previously available for SSDT - that's not entirely true. Database unit testing was most recently delivered in Visual Studio 2010, and any database unit tests written therein work perfectly well against SQL Server databases created using SSDT (why wouldn't they - it's just a database after all). In other words, if you're running SSDT inside Visual Studio 2010 then you could carry on freely writing database unit tests; some of the tight integration between the two (e.g. right-click on an object in SQL Server Object Explorer and choose to create a unit test) was not there - but I've never found that to be a problem. I am currently working on a project that uses SSDT for database development and have been happily running VS2010 database unit tests for a few months now. All that being said, delivery of database unit testing for SSDT is now with us and that is good news, not least because we now have the ability to create unit tests in VS2012. We also get tight integration with SSDT itself, the like of which I mentioned above. Having now had a look at the new features, I was delighted to find that one of my big complaints about database unit testing has been solved. As I reported here on Connect, a refactor operation would cause unit test code to get completely mangled. See here the result of such an operation:

    SELECT *
    FROM bi.ProcessMessageLog pml
    INNER JOIN bi.[LogMessageType] lmt
        ON pml.[LogMessageTypeId] = lmt.[LogMessageTypeId]
    WHERE pml.[LogMessage] = 'Ski[LogMessageTypeName]of message: IApplicationCanceled'
    AND lmt.[LogMessageType] = 'Warning';

    which is obviously not ideal. Thankfully that seems to have been solved with this latest release. One disappointment about this new release is that the process for running tests as part of a CI build has not changed from the horrendously complicated process required previously. Check out my blog post Setting up database unit testing as part of a Continuous Integration build process [VS2010 DB Tools - Datadude] for instructions on how to do it. In that blog post I describe it as "fiddly" - I was being kind when I said that! @Jamiet
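    For readers who haven't used the feature: the body of a database unit test is just a T-SQL script, and one of the built-in test conditions (Row Count, Scalar Value, Not Empty ResultSet, and so on) is then bound to its result set in the test designer. A minimal hedged sketch, reusing the tables from the query above (the expected value and the chosen condition are made up for illustration):

    -- Test script: count the warning-level log messages
    SELECT COUNT(*) AS WarningCount
    FROM bi.ProcessMessageLog pml
    INNER JOIN bi.[LogMessageType] lmt
        ON pml.[LogMessageTypeId] = lmt.[LogMessageTypeId]
    WHERE lmt.[LogMessageType] = 'Warning';
    -- In the designer, attach a Scalar Value test condition to row 1, column 1 of this
    -- result set with the expected value (for example 0 if no warnings should exist).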


  • Reassociate .SQL files with VS T-SQL Editor

    - by Scott
    I seem to have lost the association from .sql files to the default VS T-SQL editor. I'm using Visual Studio 2008. When I open a .sql file it opens in a text editor with no syntax highlighting. How do I reassociate all .sql files with the default T-SQL editor from inside Visual Studio?


  • Problem in SQL Server 2005

    - by megala
    I know how to create a table in SQL Server 2005, but due to a system problem I had to reinstall SQL Server 2005. After that I tried to select Start - Programs - Microsoft SQL Server 2005 - SQL Server Management Studio Express, but SQL Server Management Studio Express does not exist there. How do I solve this problem? Thanks in advance.


  • How do you hook a C++ compiled DLL function to a SQL database?

    - by Thomas
    I want to do something like: SIMILARTO(lastName, 'Schwarseneger', 2), where lastName is the field in the database, 'Schwarseneger' is the value the lastName field is being compared to, and 2 is the maximum number of characters (edit distance) that can differ between the lastName field and the entered value. I can implement the SIMILARTO function in C++ using the Levenshtein distance (http://en.wikipedia.org/wiki/Levenshtein_distance), but how do I hook the function in a DLL to a MySQL implementation?
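    In MySQL the usual mechanism for this is a loadable UDF: compile the function into a shared library, copy it into MySQL's plugin directory, and register it with CREATE FUNCTION ... SONAME. A hedged sketch follows; the function, library and table names are hypothetical, and the C++ side has to export the UDF calling-convention functions (similarto_init, similarto, similarto_deinit):

    -- Register the compiled function from the shared library (use a .so name on Linux)
    CREATE FUNCTION similarto RETURNS INTEGER SONAME 'udf_similarto.dll';

    -- It can then be used like any built-in function
    SELECT lastName
    FROM person
    WHERE similarto(lastName, 'Schwarseneger', 2) = 1;

    -- Unregister it again if needed
    DROP FUNCTION similarto;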


  • Multiple websites and different websites on the same IIS server

    - by Krystian
    I've got this kind of situation: I've got a Windows 2003 server with a DNS server on the same machine. It is bound to an address, for example siteA.com. Now I want to add to this machine a website whose name will be siteB.com. I created a new website on the IIS 6 server with the name siteB.com, but I don't know how to set up the DNS server. My primary DNS administrator created an alias for my server and described it to me like this: 'siteB.com is an alias for siteA.com', and then he said that I have to configure my DNS server on my own. I've tried to add a new alias in my existing DNS zone (for siteA.com), but it binds the FQDN like this: siteB.siteA.com, which is wrong, I suppose. Can anybody explain how I can bind these 2 websites to my server?


  • Subdomains for different applications on Windows Server 2008 R2 with Apache and IIS 7 installed

    - by Yusuf
    I have a home server on which I have installed Apache and several other applications that have a web GUI (JDownloader, Free Download Manager). In order to access each of these apps (whether from the local network or the Internet), I have to enter a different port, e.g.:
    http://server:8085 or http://xxxx.dyndns.org:8085 for Apache
    http://server:90 or http://xxxx.dyndns.org:90 for FDM
    http://server:8081 or http://xxxx.dyndns.org:8081 for JDownloader
    I would like to be able to access them using subdomains, e.g.:
    http://apache.server or http://apache.xxxx.dyndns.org for Apache
    http://fdm.server or http://fdm.xxxx.dyndns.org for FDM
    http://jdownloader.server or http://jdownloader.xxxx.dyndns.org for JDownloader
    First of all, would it be possible the way I want it, i.e., both from the LAN and the Internet, and if yes, how? Even if it's possible only from the Internet, I would like to know how to do it, if there's a way.


  • Windows 2008 server hosting in Europe

    - by Lasse P
    Hi, I'm searching for a Windows 2008 server in Europe, preferably in Germany or the UK, or anywhere with good routing to Denmark (as that's where the primary traffic will be generated from). The server will be used as a web server (ASP.NET MVC, PHP), mail server and database server. We are running a few sites with around 200 concurrent users, which isn't much, but we intend to expand in the near future and the server should be easy to scale in the form of adding more RAM and HDD space - if possible. I think a virtual server may be the best choice - Hyper-V or Virtuozzo? - considering cost vs. specs, but I'm open to suggestions. The max budget is in the range of $1000-1200/year. Do you guys have any suggestions? Let me know if you need further info.


  • Windows Server 2008 rejecting packets from clients

    - by l46kok
    We deployed a server application in .NET 4.0 that is going to run on Windows Server 2008 R2. Strangely, the clients cannot connect to the server given an external IP and the server port. I've run Wireshark diagnostics on the server computer and verified that the packets are arriving at the NIC without any issue, so it seems Windows Server 2008 is the culprit here. I've tried temporarily disabling the firewall and adding the server port to an inbound/outbound rule, but it still doesn't solve the issue. How can I solve this?


  • Windows Storage Server 2008 hangs at logon

    - by ErJab
    We have a Dell PowerVault NX-3000 server running Windows Server 2008. Every now and then, when I try to log in, the server seems to hang at the Welcome screen after I type in the password. However, all other services on the server are running fine - users are able to print off the print server and access their files. It just won't let me log in. Any idea why this is happening? P.S.: I can't look at the server logs, because it won't let me log in in the first place. Remote administration is also disabled on the server, so I can't use remote administration tools to look at the logs.


  • Enable FTP on OS X 10.8 Mountain Lion Server

    - by Oleg Trakhman
    There is a LAN comprising several Mac machines (iMac, Mac Pro, MacBook, etc.), an AirPort Express router and a Mac Mini Server running OS X Server 10.8 (Mountain Lion Server). I need to share a folder on the Mac Mini Server by FTP. What I have tried so far:
    - Made a special partition for FTP access, called "Reports", so the shared folder would be "/Volumes/Reports"
    - Gave access to every user and group in the system, and also enabled guest access
    - Checked the POSIX ACL, which is "rwxrwxrwx"
    - Checked the sharing settings in "Preferences.app" and "Server.app"
    - Checked that users have access to the FTP service
    - Enabled FTP in Server.app
    I tried to access the shared folder (by FTP) via Cyberduck, via Finder and via the shell: ftp server.local. And what I got:
    $ ftp [email protected]
    Trying 10.0.2.2...
    Connected to server.local.
    220 10.0.2.2 FTP server (tnftpd 20100324+GSSAPI) ready.
    331 User ftpuser accepted, provide password.
    Password:
    530 User ftpuser may not use FTP.
    and
    $ ftp [email protected]
    Trying 10.0.2.2...
    Connected to server.local.
    220 10.0.2.2 FTP server (tnftpd 20100324+GSSAPI) ready.
    331 User admin accepted, provide password.
    Password:
    530 User admin denied by SACL.
    ftp: Login failed
    ftp>
    (admin is the administrator account, ftpuser is a special user account made to access FTP.) What am I doing wrong? Getting really tired of this...


  • VisualSVN Server running but cannot access or browse repositories

    - by user1783560
    Operating system: Windows Web Server 2008 R2. VisualSVN Server version: 2.5.7. Subversion: 1.7.7. Apache: 2.2.22. I freshly installed the latest version of VisualSVN Server and created one repository in it. In the server management window it shows that the server is up and running, but when I try to browse it in a web browser, it doesn't respond. I am not able to import my existing code into the repository (Error: Cannot connect to server), nor open/browse the repository with any of these URLs: localhost:81/svn, http://www.myserver.com:81/svn or http://myIPAddress:81/svn. The VisualSVN log is clean. The last information in the server log is "The server is listening to port 81."


  • Two DHCP servers on the same network

    - by CesarGon
    We are setting up a routing link between the Windows Server 2008 networks of two different buildings in my organisation. Each network uses a different IP addressing scheme (one uses public addresses, the other one uses private), but the goal is having a single Windows Server domain across the gap between the buildings. The link is provided by a 100-Mbps point-to-point line. I have always understood that you should not have more than one DHCP server on a network. However, we are planning to put a domain controller on each building, and each domain controller will be a DNS server and a DHCP server as well. The intention is that a machine booting up in building A gets its IP address from the DHCP server closer to it, in building A, while a machine booting up in building B gets an address from the DHCP server in building B. Since the two buildings will be linked and the network will be only one, will this work? How can I avoid that a machine booting up in building A gets an address from the DHCP server in building B (or vice versa)? Thanks.


