Search Results

Search found 9845 results on 394 pages for 'ntp servers'.

Page 356/394 | < Previous Page | 352 353 354 355 356 357 358 359 360 361 362 363  | Next Page >

  • cannot modify header information puzzler

    - by outofmyleague_lucy
    Hi, I'm out of my league here. I have written what has now become identical code for two sites hosted on different servers. The first works perfectly, so I have been using it to compare the second against. From a login form I redirect to loginaction.php. When I include db_connect.php, i.e.

        <?php
        session_start();
        include 'db_connect.php';
        $user = $_POST['formUser'];
        $password = $_POST['formPassword'];
        // etc.
        ?>

    I get "Cannot modify header information - headers already sent", but when I paste the contents of db_connect.php in directly, i.e.

        <?php
        session_start();
        $connection = mysql_connect("localhost", "user", "pass");
        mysql_select_db("db_name", $connection);
        $user = $_POST['formUser'];
        $password = $_POST['formPassword'];
        // etc.
        ?>

    it works. Any ideas?

    Edit - requested error message:

        Warning: Cannot modify header information - headers already sent by (output started at /home/avenncou/public_html/include/db_connect.php:4) in /home/avenncou/public_html/include/loginaction.php on line 14

    Line 14 is: header("Location: {$_SERVER["HTTP_REFERER"]}");

    Edit - requested db_connect.php:

        <?php
        $connection = mysql_connect("localhost", "user", "pass"); // or die ("Unable to connect!");
        mysql_select_db("db", $connection); // or die ("Unable to select database!");
        ?>

    That is all of it (the die() calls are commented out in case that's where the error was)!
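
    The warning itself points at the likely culprit: output started at db_connect.php line 4, i.e. around the closing ?> - the classic symptoms of trailing whitespace after the close tag, or a UTF-8 BOM at the top of the file, either of which counts as output before header() runs. A minimal sketch of the usual fix (same code, closing tag dropped so nothing after it can become output):

        <?php
        // db_connect.php - saved without a BOM, no whitespace before <?php,
        // and no closing ?> so trailing newlines can't be sent as output.
        $connection = mysql_connect("localhost", "user", "pass");
        mysql_select_db("db", $connection);

    With no output emitted by the include, the header("Location: ...") call in loginaction.php should go through.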

    Read the article

  • Java Executor: Small tasks or big ones?

    - by Arash Shahkar
    Consider one big task which could be broken into hundreds of small, independently-runnable tasks. To be more specific, each small task sends a light network request and decides upon the answer received from the server. These small tasks are not expected to take longer than a second, and involve a few servers in total. I have in mind two approaches to implement this using the Executor framework, and I want to know which one is better and why:

    1. Create a few tasks, say 5 to 10, each doing a batch of sends and receives.
    2. Create a single task (Callable or Runnable) for each send & receive and schedule all of them (hundreds) to be run by the executor.

    I'm sorry if my question shows that I'm too lazy to test these and see for myself which is better (at least performance-wise). While it looks for an answer to this specific case, my question has a more general aspect: in situations like these, when you want an executor to do all the scheduling and other bookkeeping, is it better to create lots of small tasks or to group them into a smaller number of bigger tasks?
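
    For scale, a minimal sketch of the second approach: one Callable per request, with concurrency capped by the pool size rather than the task count. The request/check logic here is a placeholder:

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;

        public class SmallTasksSketch {
            // Placeholder for one light network request plus the decision on its answer.
            static boolean sendAndCheck(int requestId) {
                return requestId % 2 == 0;
            }

            public static void main(String[] args) throws InterruptedException {
                ExecutorService pool = Executors.newFixedThreadPool(10); // caps concurrent requests
                List<Callable<Boolean>> tasks = new ArrayList<Callable<Boolean>>();
                for (int i = 0; i < 500; i++) {
                    final int id = i;
                    tasks.add(new Callable<Boolean>() {
                        public Boolean call() { return sendAndCheck(id); }
                    });
                }
                List<Future<Boolean>> results = pool.invokeAll(tasks); // blocks until all finish
                pool.shutdown();
            }
        }

    Submitting hundreds of small tasks costs little here: the pool still runs only 10 at a time, and the executor's own queueing replaces the hand-rolled batching of the first approach.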

    Read the article

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project which includes all the pages for the app and references various projects holding the database and business logic. However, some of the people on the project want separate projects for the pages of each module.

    To make this clearer, here is the layout I'm advocating:

        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc...

    And here is what they are advocating:

        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc...

    (* project will be published to a virtual directory of IIS)

    Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far.

    Single virtual directory pros:
    - Modules can share a single MasterPage.
    - Modules can share UserControls (this will be common).
    - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified.
    - Less chance of having incompatible module versions deployed together.

    Multiple virtual directories pros:
    - Can publish a new version of a single module without disrupting other modules.
    - Modules are more compartmentalized; changes are less likely to break other modules.

    I don't buy those arguments, though. First, using load-balanced servers (which we will have), we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then there is either an improper dependency, or the break will show up eventually in the other module anyway, when the developers copy over the latest version of the UserControl, MasterPage or dll.

    As a point of reference, there are about 10 developers on the project for about 50% of their time. The initial development will be about 9 months.

    Read the article

  • Restrict allowed file upload types PHP

    - by clang1234
    Right now I have a function which takes my uploaded file, checks the extension, and if it matches an array of valid extensions the file is processed. It's a contact list importer. What I need to figure out is how to be sure that the file (in this case a .csv) is actually what it says it is (e.g. not an Excel file that just got renamed to .csv). Our servers run PHP 5.2.13. Here's the current validation function I have:

        public static function validateExtension($file_name, $ext_array)
        {
            $extension = strtolower(strrchr($file_name, "."));
            if (!$file_name) {
                return false;
            }
            if (!$ext_array) {
                return true;
            }
            // Normalize the allowed extensions to lowercase with a leading dot.
            $extensions = array();
            foreach ($ext_array as $value) {
                $first_char = substr($value, 0, 1);
                if ($first_char <> ".") {
                    $extensions[] = "." . strtolower($value);
                } else {
                    $extensions[] = strtolower($value);
                }
            }
            return in_array($extension, $extensions);
        }
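
    The extension check can never catch a renamed file, because it only looks at the name. If the Fileinfo extension is available on your PHP 5.2 build (it's a PECL extension there; bundled from 5.3), it can sniff the actual bytes; a sketch, with the upload field name as an assumption:

        <?php
        // Ask Fileinfo for the MIME type of the uploaded bytes, not the name.
        $finfo = finfo_open(FILEINFO_MIME);
        $mime  = finfo_file($finfo, $_FILES['contacts']['tmp_name']);
        finfo_close($finfo);

        // A real .xls renamed to .csv reports an application/* type;
        // a genuine CSV comes back as text/plain, text/csv or similar.
        $looks_like_csv = (strpos($mime, 'text/') === 0);

    Even then it's heuristic - a CSV is just text - so a belt-and-braces option is to run the first row through fgetcsv() and check that it parses into the columns you expect.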

    Read the article

  • Removing the port number from URL

    - by DrewSSP
    I'm new to anything related to servers and am trying to deploy a Django application. Today I bought a domain name for the app and am having trouble configuring it so that the base URL does not need the port number at the end of it. I have to type www.trackthecharts.com:8001 to see the website, when I only want to use www.trackthecharts.com. I think the problem is somewhere in my nginx, gunicorn or supervisor configuration.

    gunicorn_config.py:

        command = '/opt/myenv/bin/gunicorn'
        pythonpath = '/opt/myenv/top-chart-app/'
        bind = '162.243.76.202:8001'
        workers = 3

    nginx config:

        server {
            server_name 162.243.76.202;
            access_log off;

            location /static/ {
                alias /opt/myenv/static/;
            }

            location / {
                proxy_pass http://127.0.0.1:8001;
                proxy_set_header X-Forwarded-Host $server_name;
                proxy_set_header X-Real-IP $remote_addr;
                add_header P3P 'CP="ALL DSP COR PSAa PSDa OUR NOR ONL UNI COM NAV"';
            }
        }

    supervisor config:

        [program:top_chart_gunicorn]
        command=/opt/myenv/bin/gunicorn -c /opt/myenv/gunicorn_config.py djangoTopChartApp.wsgi
        autostart=true
        autorestart=true
        stderr_logfile=/var/log/supervisor_gunicorn.err.log
        stdout_logfile=/var/log/supervisor_gunicorn.out.log

    Thanks for taking a look.
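
    Two things stand out when these configs are read together: browsers default to port 80, so the :8001 in the URL is only needed because nothing answers on 80 for the domain; and gunicorn is bound to the public IP, which is why port 8001 happens to work when typed directly. A sketch of the likely fix, assuming nginx should be the public entry point (only the marked lines change):

        server {
            listen 80;                                   # answer on the default HTTP port
            server_name trackthecharts.com www.trackthecharts.com;  # match the domain, not the IP
            access_log off;

            location /static/ {
                alias /opt/myenv/static/;
            }

            location / {
                proxy_pass http://127.0.0.1:8001;        # gunicorn stays on 8001, internally
            }
        }

    Changing the gunicorn bind to '127.0.0.1:8001' then keeps the app server reachable only through nginx, so www.trackthecharts.com:8001 stops working from outside - which is usually what you want.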

    Read the article

  • NLB and Web Deploy

    - by asgerhallas
    I have two web servers in a cluster serving a web application. We use MS Web Deploy to push a new version of the application to one server, and then again to synchronize the files to the other server in the cluster. It seems the most ordinary thing to do. But wouldn't there be a problem while one server has the new version deployed and the other is not yet finished? Won't it cause trouble when a page loaded from the new version makes a web service request and the load balancer sends the request to the server with the old version?

    What's the best way to avoid this? I thought about scripting a drainstop of the server we deploy to, making sure only one server is running at a time. But I can't find anyone else who seems to have written about such a solution, and I guess it doesn't scale very well either. Another option could be to shut down all servers while updating, but that doesn't seem very clever. Any suggestions?

    Read the article

  • What programming technique not done by you, was ahead of its time?

    - by Ferds
    There are developers who understand a technology and produce a solution months or years ahead of its time. I worked with a guy who designed a system, using a C# beta, which would monitor different system components on several servers. Monitoring components were registered in SQL Server, and an NT Service would pick them up (via reflection) and instantiate them. The simplicity of this design was that another developer (me), who understood one part of the system, could produce a component to monitor it, since he had the specialized knowledge of that part. I would derive from the monitor base class (to start, stop and log info), install it in the GAC on the monitoring server, and then add an entry to the components table in SQL Server. Then the main engine would pick up this component and do its magic. I understood parts of it, but couldn't work out why I derived from a base class, why I added to the GAC, etc. This was 6 years ago, and it took me months to realize what he had achieved.

    What programming technique, not done by you, was ahead of its time?

    EDIT: Techniques that you have seen at work or in journals/blogs - I don't mean historically, over years.

    Read the article

  • Help me understand WebDAV and Autoversioning

    - by Malfist
    I just read the WebDAV appendix in the O'Reilly Subversion book. I don't quite understand it. It talked about users being able to "mount" WebDAV directories (trees) and manipulate the files like they normally would, with the server automagically creating a new revision on each save. The way it explained it, it sounded like it would work with any program, but then at the end of the appendix it listed a series of programs that work with WebDAV servers, which leads me to think that maybe it doesn't work the way it was originally described.

    My question is this: how exactly do you interact with a WebDAV repository? Can I do this, for example: copy a file locally via FTP, edit it with Notepad++, then upload it via FTP to the server, and have the server do a commit and create a new revision containing the file I just edited and uploaded? Also, if that is possible, what happens if two people edit the file locally (on their machines) and upload two revisions to the server? With WebDAV, will I be able to replace Dreamweaver's "Oops, someone edited this before you" with simple FTP uploads and Subversion conflict resolution?

    Read the article

  • Linking Apache to Tomcat with multiple domains.

    - by Royce Thigpen
    Okay, so I've been working on this for a while, and have been searching, but so far I have not found any answers that actually answer what I want to know. I'm a little bit at the end of my rope with this one, but I'm hoping I can figure it out sometime soon.

    I have Apache 2 installed and serving up standard web pages, and I also have it linked to a Tomcat instance for one of the domains it currently supports. However, I want to add another domain to the server via Apache that points to a separate code base from the one I already have.

    A little information on my setup: I'm running a single Tomcat 5.5 instance with Apache 2, using mod_jk to connect them. I have a worker in workers.properties whose "host" field points to "localhost", with the correct port for my Tomcat instance, so that all works. In my Tomcat server.xml, I have a host defined as "localhost" that points at the webapp I am currently serving, with that host set as the defaultHost as well.

    One thought I had was to add a new worker with a different host than "localhost" (e.g. host2) and then define a new host in server.xml called "host2" to match it. But after reading around on the internet, it seems the "host" field of a worker must point to a server, not a hostname in the Tomcat instance - is this correct?

    Again, a simple rundown of what I want: a setup in the Apache/Tomcat combo such that www.domain1.com points at "webapp1" and www.domain2.com points at "webapp2".
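
    Yes - the worker's host/port is just the TCP address of a Tomcat AJP connector, so a single worker can serve both domains. The usual pattern is name-based virtual hosts in Apache that JkMount to the same worker, with matching <Host> entries in Tomcat: mod_jk forwards the Host header, and Tomcat selects the <Host> from it. A sketch, with the appBase paths as assumptions:

        # httpd.conf
        NameVirtualHost *:80

        <VirtualHost *:80>
            ServerName www.domain1.com
            JkMount /* worker1
        </VirtualHost>

        <VirtualHost *:80>
            ServerName www.domain2.com
            JkMount /* worker1
        </VirtualHost>

        <!-- server.xml, inside the <Engine> element -->
        <Host name="www.domain1.com" appBase="webapps-domain1"
              unpackWARs="true" autoDeploy="true"/>
        <Host name="www.domain2.com" appBase="webapps-domain2"
              unpackWARs="true" autoDeploy="true"/>

    Each appBase directory then holds its own ROOT webapp (webapp1 and webapp2 respectively), and the existing "localhost" host can stay as the defaultHost for anything that doesn't match.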

    Read the article

  • Why would paperclip not assign an ID to my uploaded photos?

    - by Trip
    I just deployed to a cluster server, and my delayed_jobs recipe was overwritten in the process. I solved that; delayed_jobs is up and running, but it can't find the IDs of images that are uploaded. The images are saved correctly:

        Processing PhotosController#create (for 173.161.167.41 at 2010-06-01 05:09:14) [POST]
        Parameters: {"Filename"="1.jpg", "gallery_id"="1298", "action"="create", "amp"=nil, "authenticity_token"="qmbnpwFY8a5E3YtS/4fMWF/Z8evCE4hMxqKVJw0I7Ek=", "Upload"="Submit Query", "controller"="photos", "organization_id"="470", "_hq_channel_session"="BAh7CSIYdXNlcl9jcmVkZW50aWFsc19pZGkHIhV1c2VyX2NyZWRlbnRpYWxzIgGAOGRlZDc0NGJlOWU3NTNlNDFlYmVlMDdjMzIzYjA1ZjQxNGE5ZDY4YjNmYjFmNjNkMDQ2OWY2ZDQyOTljZDhiMDFlNmRkMDljNThmMzBmOWJhMTIwNDhkMDI5MTMxYmU5MDczYjIxZmI4YmQxMDVlMTBmNjZmOWFhODE1ZTBjMGM6EF9jc3JmX3Rva2VuIjFxbWJucHdGWThhNUUzWXRTLzRmTVdGL1o4ZXZDRTRoTXhxS1ZKdzBJN0VrPToPc2Vzc2lvbl9pZCIlMjAwMDQ3ZDQ3ZWUyZTgzODIxYzdjOGI3OTdmZGJiMDM=--ac6aa580262938bf5a4d6b9a740722b680eb5d48", "Filedata"=#}
        [paperclip] Saving attachments.
        [paperclip] saving /data/HQ_Channel/releases/20100530153454/public/system/photos/9253/original/1.jpg
        [paperclip] Saving attachments.
        [paperclip] Saving attachments.
        Completed in 127ms (View: 2, DB: 91) | 200 OK [http://invent.hqchannel.com/organizations/470/media/galleries/1298/photos?_hq_channel_session=BAh7CSIYdXNlcl9jcmVkZW50aWFsc19pZGkHIhV1c2VyX2NyZWRlbnRpYWxzIgGAOGRlZDc0NGJlOWU3NTNlNDFlYmVlMDdjMzIzYjA1ZjQxNGE5ZDY4YjNmYjFmNjNkMDQ2OWY2ZDQyOTljZDhiMDFlNmRkMDljNThmMzBmOWJhMTIwNDhkMDI5MTMxYmU5MDczYjIxZmI4YmQxMDVlMTBmNjZmOWFhODE1ZTBjMGM6EF9jc3JmX3Rva2VuIjFxbWJucHdGWThhNUUzWVRTLzRmTVdGL1o4ZXZDRTRoTXhxS1ZKdzBJN0VrPToPc2Vzc2lvbl9pZCIlMjAwMDQ3ZDQ3ZWUyZTgzODIxYzdjOGI3OTdmZGJiMDM%3D--ac6aa580262938bf5a4d6b9a740722b680eb5d48&authenticity_token=qmbnpwFY8a5E3YtS%2F4fMWF%2FZ8evCE4hMxqKVJw0I7Ek%3D]

    And then delayed_jobs keeps spinning around in circles on this:

        2010-06-01T05:09:02-0700: * [Worker(delayed_job host:ip-10-251-197-159 pid:19994)] acquired lock on PhotoJob
        2010-06-01T05:09:02-0700: * [JOB] delayed_job host:ip-10-251-197-159 pid:19994 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9247 - 0 failed attempts
        2010-06-01T05:09:02-0700: * [Worker(delayed_job host:ip-10-251-197-159 pid:19994)] acquired lock on PhotoJob
        2010-06-01T05:09:02-0700: * [JOB] delayed_job host:ip-10-251-197-159 pid:19994 failed with ActiveRecord::RecordNotFound: Couldn't find Photo with ID=9245 - 0 failed attempts

    So what I get is that the photos are not being assigned IDs by paperclip. Anyone know where I could poke and pry from here?

    UPDATE: I created a clone of the application on a single server, and there are no problems there. The images on the cluster do show up (occasionally). If I keep clicking through the folders that lead to a photo, half the time it returns a 404 with the photo not found, and the other half it presents the photo. So the problem has got to be with how ActiveRecord behaves across multiple servers.

    Read the article

  • Task Queue stopped working

    - by pocoa
    I was playing with the Google App Engine Task Queue API to learn how to use it, but I can't make it trigger locally. My application works like a charm when I upload it to Google's servers, but the queue doesn't trigger locally. All I see in the admin is the list of tasks, and when their ETA comes, it just passes. It's as if they run, fail, and wait for their retries - but I can't see any of these events on the command line. When I click "Run" in the admin panel, the task runs successfully and I can see the requests on the command line.

    I'm using App Engine SDK 1.3.4 on Linux with google-app-engine-django. I've been trying to find the problem for 3 hours now and I can't. It's also very hard to debug GAE applications, because debug messages do not appear on the console screen. Thanks.

    Read the article

  • Command or tool to display list of connections to a Windows file share

    - by BizTalkMama
    Is there a Windows command or tool that can tell me what users or computers are connected to a Windows file share? Here's why I'm looking for this: I've run into issues in the past where our deployment team has deployed BizTalk applications to one of our environments using the wrong bindings, leaving us with two receive locations pointing to the same file share (i.e. both dev and test servers point to the dev receive location URI). When this occurs, the two environments take turns processing the files received (meaning if I am attempting to debug something in one environment and the other environment has picked the file up, it looks as if my test file has disappeared into thin air). We have several different environments, plus individual developer machines, and I'd rather not have to check each one individually to find the culprit. I'm looking for a quick way to detect which locations are connected to the share once I notice my test files vanishing. If I can determine which connections are invalid, I can go directly to the person responsible for that environment and avoid the time it takes to randomly ask around. Or, if the connections all appear to be correct, I can go directly to troubleshooting where in the process the message gets lost. Any suggestions?
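
    For the built-in route, the server hosting the share can list its own inbound sessions and open files; a sketch, run from an elevated prompt on the file server (the same data is visible under Computer Management > Shared Folders):

        :: Who has a session against this server's shares (computer, user, open count)
        net session

        :: Which files are currently open over the network, and by whom
        net file
        openfiles /query /fo table

    Since the receive locations poll the share, the culprit machine should show up as a session from a hostname or IP that doesn't belong to the environment you're debugging.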

    Read the article

  • [C#] how to do Exception Handling & Tracing

    - by shrimpy
    Hi all, I am reading some C# books and got an exercise I don't know how to do, or am not sure what the question means.

    Problem: After working for a company for some time, your skills as a knowledgeable developer are recognized, and you are given the task of "policing" the implementation of exception handling and tracing in the source code (C#) for an enterprise application that is under constant incremental development. The two goals set by the product architect are:

    1. 100% of methods in the entire application must have at least a standard exception handler, using try/catch/finally blocks; more complex methods must also have additional exception handling for specific exceptions.
    2. All control-flow code can optionally write "tracing" information to assist in debugging and instrumentation of the application at run-time in situations where traditional debuggers are not available (e.g. on staging and production servers).

    (I don't quite understand these criteria. I came from the Java world; Java has two kinds of exceptions, checked and unchecked. Developers must handle checked exceptions and log them. With unchecked exceptions we may still log, but most of the time we just throw them. Now that I've come to C#, what should I do?)

    Questions for the problem: List the rules you would create for the development team to follow, and the ways in which you would enforce those rules, to achieve these goals. How would you go about ensuring that all existing code complies with the rules specified by the product architect; in particular, what considerations would impact your planning for the work to ensure all existing code complies?
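
    On the mechanics (as opposed to the policy questions), a minimal sketch of what a "standard handler plus tracing" method could look like in C#, using System.Diagnostics.Trace; the class and messages are illustrative:

        using System;
        using System.Diagnostics;

        public class OrderProcessor
        {
            public void Process(int orderId)
            {
                Trace.WriteLine("Process(" + orderId + ") entered");   // control-flow tracing
                try
                {
                    // ... the method's real work ...
                }
                catch (ArgumentException ex)       // "specific exceptions" for complex methods
                {
                    Trace.TraceError("Bad argument in Process: " + ex);
                    throw;
                }
                catch (Exception ex)               // the "standard" catch-all
                {
                    Trace.TraceError("Unexpected failure in Process: " + ex);
                    throw;                         // rethrow: all C# exceptions are unchecked
                }
                finally
                {
                    Trace.WriteLine("Process(" + orderId + ") exiting");
                }
            }
        }

    Trace output goes to whatever TraceListeners are configured - app.config can add file or event-log listeners without recompiling - which is what makes it usable on staging and production boxes where no debugger is attached. And since all C# exceptions behave like Java's unchecked ones, "catch, log, rethrow" at well-chosen boundaries, rather than literally in every method, is the common reading of goal 1 in practice.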

    Read the article

  • Can I encrypt web.config with a custom protection provider whose assembly is not in the GAC?

    - by James
    I have written a custom protected configuration provider for my web.config. When I try to encrypt my web.config with it, I get the following error from aspnet_regiis (this is running in my MSBuild):

        aspnet_regiis.exe -pef appSettings . -prov CustomProvider

        Could not load file or assembly 'MyCustomProviderNamespace' or one of its dependencies. The system cannot find the file specified.

    Checking the Fusion log, I confirmed it probes both the GAC and 'C:/WINNT/Microsoft.NET/Framework/v2.0.50727/' (the location of aspnet_regiis), but it cannot find the provider. I do not want to move my component into the GAC; I want to leave the custom assembly in my ApplicationBase so I can copy it around to various servers without having to pull/push from the GAC. Here is my provider configuration in the web.config:

        <configProtectedData>
          <providers>
            <add name="CustomProvider" type="MyCustomProviderNamespace.MyCustomProviderClass, MyCustomProviderNamespace" />
          </providers>
        </configProtectedData>

    Has anyone got any ideas?

    Read the article

  • Help Optimizing MySQL Table (~ 500,000 records) and PHP Code.

    - by Pyrite
    I have a MySQL table that collects player data from various game servers (Urban Terror). The bot that collects the data runs 24/7, and the table is currently up to about 475,000+ records. Because of this, querying the table from PHP has become quite slow. I wonder what I can do on the database side of things to make it as optimized as possible, so I can then focus on the application that queries it. The table is as follows:

        CREATE TABLE IF NOT EXISTS `people` (
          `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
          `name` varchar(40) NOT NULL,
          `ip` int(4) unsigned NOT NULL,
          `guid` varchar(32) NOT NULL,
          `server` int(4) unsigned NOT NULL,
          `date` int(11) NOT NULL,
          PRIMARY KEY (`id`),
          UNIQUE KEY `Person` (`name`,`ip`,`guid`),
          KEY `server` (`server`),
          KEY `date` (`date`),
          KEY `PlayerName` (`name`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 COMMENT='People that Play on Servers' AUTO_INCREMENT=475843 ;

    I'm storing the IPv4 addresses (ip and server) as 4-byte integers, and using the MySQL functions INET_NTOA() etc. to encode and decode; I heard this is faster than varchar(15). The guid is an md5sum, 32 hex chars. The date is stored as a Unix timestamp. I have a unique key on name, ip and guid, to avoid duplicates of the same player. Do I have my keys set up right? Is the way I'm storing data efficient?

    Here is the code that queries this table. You search for a name, IP, or GUID; it grabs the results of the query and then cross-references other records matching the name, IP, or GUID found in the first pass, for each field. This is kind of hard to explain, but basically, if I search for one player by name, I'll see every other name he has used, every IP he has used and every GUID he has used.

        <form action="<?php echo $_SERVER['PHP_SELF']; ?>" method="post">
        Search: <input type="text" name="query" id="query" /><input type="submit" name="btnSubmit" value="Submit" />
        </form>

        <?php if (!empty($_POST['query'])) { ?>
        <table cellspacing="1" id="1up_people" class="tablesorter" width="300">
        <thead>
        <tr>
            <th>ID</th><th>Player Name</th><th>Player IP</th>
            <th>Player GUID</th><th>Server</th><th>Date</th>
        </tr>
        </thead>
        <tbody>
        <?php
        function super_unique($array)
        {
            $result = array_map("unserialize", array_unique(array_map("serialize", $array)));
            foreach ($result as $key => $value) {
                if (is_array($value)) {
                    $result[$key] = super_unique($value);
                }
            }
            return $result;
        }

        if (!empty($_POST['query'])) {
            $query = trim($_POST['query']);
            $count = 0;
            $people = array();
            $link = mysql_connect('localhost', 'mysqluser', 'yea right!');
            if (!$link) {
                die('Could not connect: ' . mysql_error());
            }
            mysql_select_db("1up");

            $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
                    FROM 1up_people
                    WHERE (name LIKE \"%$query%\" OR INET_NTOA(ip) LIKE \"%$query%\" OR guid LIKE \"%$query%\")";
            $result = mysql_query($sql, $link);
            if (!$result) {
                die(mysql_error());
            }

            // Take the initial results and parse each row into an array.
            while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                $people[] = array(
                    'id'     => $row[0],
                    'name'   => htmlspecialchars($row[1]),
                    'ip'     => $row[2],
                    'guid'   => $row[3],
                    'server' => $row[4],
                    'date'   => $row[5]
                );
            }

            // Now, for each ip, guid and name in the results, find additional records.
            $people2 = array();
            foreach ($people as $person) {
                $ip = $person['ip'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
                        FROM 1up_people WHERE (ip = \"$ip\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $people2[] = array(
                        'id'     => $row[0],
                        'name'   => htmlspecialchars($row[1]),
                        'ip'     => $row[2],
                        'guid'   => $row[3],
                        'server' => $row[4],
                        'date'   => $row[5]
                    );
                }
            }

            $people3 = array();
            foreach ($people as $person) {
                $guid = $person['guid'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
                        FROM 1up_people WHERE (guid = \"$guid\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $people3[] = array(
                        'id'     => $row[0],
                        'name'   => htmlspecialchars($row[1]),
                        'ip'     => $row[2],
                        'guid'   => $row[3],
                        'server' => $row[4],
                        'date'   => $row[5]
                    );
                }
            }

            $people4 = array();
            foreach ($people as $person) {
                $name = $person['name'];
                $sql = "SELECT id, name, INET_NTOA(ip) AS ip, guid, INET_NTOA(server) AS server, date
                        FROM 1up_people WHERE (name = \"$name\")";
                $result = mysql_query($sql, $link);
                while ($row = mysql_fetch_array($result, MYSQL_NUM)) {
                    $people4[] = array(
                        'id'     => $row[0],
                        'name'   => htmlspecialchars($row[1]),
                        'ip'     => $row[2],
                        'guid'   => $row[3],
                        'server' => $row[4],
                        'date'   => $row[5]
                    );
                }
            }

            // Combine all the result sets into $people and de-duplicate.
            $people = array_merge($people, $people2, $people3, $people4);
            $people = super_unique($people);

            foreach ($people as $person) {
                $date = ($person['date']) ? date("M d, Y", $person['date']) : 'Before 8/1/10';
                echo "<tr>\n";
                echo "<td>" . $person['id'] . "</td>";
                echo "<td>" . $person['name'] . "</td>";
                echo "<td>" . $person['ip'] . "</td>";
                echo "<td>" . $person['guid'] . "</td>";
                echo "<td>" . $person['server'] . "</td>";
                echo "<td>" . $date . "</td>";
                echo "</tr>\n";
                $count++;
            }

            // Find total records
            //$result = mysql_query("SELECT id FROM 1up_people", $link);
            //$total = mysql_num_rows($result);

            mysql_close($link);
        }
        ?>
        </tbody>
        </table>
        <p>
        <?php echo $count . " Records Found for \"" . $_POST['query'] . "\" out of $total"; ?>
        </p>
        <?php
        }
        $time_stop = microtime(true);
        print("Done (ran for " . round($time_stop - $time_start) . " seconds).");
        ?>

    Any help at all is appreciated! Thank you.
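
    One structural observation, offered as a sketch rather than a drop-in fix: the three follow-up loops issue one query per row of the first result set, so a search matching 50 rows costs 151 round trips. Because each follow-up just re-selects by a value found in the first pass, the whole cross-reference can be expressed as one self-join (table and columns as defined above; 'search' stands in for the escaped user input):

        SELECT DISTINCT p2.id, p2.name, INET_NTOA(p2.ip) AS ip, p2.guid,
                        INET_NTOA(p2.server) AS server, p2.date
        FROM 1up_people AS p1
        JOIN 1up_people AS p2
          ON p2.name = p1.name OR p2.ip = p1.ip OR p2.guid = p1.guid
        WHERE p1.name LIKE '%search%'
           OR INET_NTOA(p1.ip) LIKE '%search%'
           OR p1.guid LIKE '%search%';

    Two caveats: leading-wildcard LIKE and INET_NTOA(ip) in the WHERE clause both defeat the indexes (every row gets scanned), and OR'd join conditions are hard for MySQL to optimize - but even so, a single server-side query with DISTINCT replaces the N+1 queries and the serialize-based de-duplication in PHP. Escaping $query with mysql_real_escape_string() before interpolating it would also be wise, since it currently goes into the SQL raw.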

    Read the article

  • Automated test, build and deploy

    - by mike79
    I have Visual Studio Team Suite 2008. I was unable to meet the requirements to set up TFS, so I'm using TortoiseSVN and VisualSVN as my version control in VSTS. I need the system set up to do the following:

    - I need to be able to create and track work items.
    - When updates are made to the project currently being worked on in VSTS, the updates will be committed back to version control.
    - Tests will be run to see that updates don't break the application. If there's a problem with an update, it will be reported back to the developer.
    - If there's no problem with the app, which is a ClickOnce application, it will automatically be built and deployed to an FTP server.

    I've never worked with version control, build servers, automated testing or continuous integration, so I need to know what needs to be put in place for this type of system. I don't know which combination/stack I should be using: CC.NET, TeamCity, Hudson, NAnt, NUnit, MSTest, Trac, BugTracker.NET, NDepend, VisualSVN Server, Perforce, MSDeploy, SCM. I want something that is free/open-source and relatively easy to set up and use. Please suggest a setup that will fit my needs. Any help appreciated.

    Read the article

  • PHP Check slave status without mysql_connect timeout issues

    - by Jonathon
    I have a web app that has a master MySQL db and four slave dbs. I want to handle all (or almost all) read-only (SELECT) queries from the slaves. Our load balancer sends the user to one of the slave machines automatically, since they are also running Apache/PHP and serving web pages. I am using an include file to set up the connections to the databases, such as:

        <?php
        // Master server (for UPDATE/INSERT/DELETE statements)
        $Host = "10.0.0.x";
        $User = "xx";
        $Password = "xx";
        $Link = mysql_connect($Host, $User, $Password);
        if (!$Link) {
            die("Master database is currently unavailable. Please try again later.");
        }

        // This connection can be used for READ-ONLY (SELECT) statements on the localhost
        $Host_Local = "localhost";
        $User_Local = "xx";
        $Password_Local = "xx";
        $Link_Local = mysql_connect($Host_Local, $User_Local, $Password_Local);

        // Fall back to the master if the slave db is down
        if (!$Link_Local) {
            $Link_Local = mysql_connect($Host, $User, $Password);
        }

    I then use $Link for all update queries and $Link_Local as the connection for SELECT statements. Everything works fine until a slave database goes down. If the local db is down, the mysql_connect() call for $Link_Local takes at least 30 seconds before it gives up and returns to the script. This causes a huge backlog of page serves and basically shuts down the system (due to the extremely slow response time).

    Does anyone know of a better way to handle connections to slave servers via PHP? Or is there some kind of timeout function that could be used to stop the mysql_connect() call after 2-3 seconds? Thanks for the help. I searched the other mysql_connect threads but didn't see any that addressed this issue.
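
    On the timeout question: mysql_connect() honors the mysql.connect_timeout ini setting (default 60 seconds), and it can be changed at runtime with ini_set() before the connect attempt. A sketch, using the same variables as above:

        <?php
        // Fail fast on the local slave: give it ~2 seconds instead of the default.
        ini_set('mysql.connect_timeout', 2);
        $Link_Local = @mysql_connect($Host_Local, $User_Local, $Password_Local);

        if (!$Link_Local) {
            // Allow the fallback connection to the master a more generous window.
            ini_set('mysql.connect_timeout', 10);
            $Link_Local = mysql_connect($Host, $User, $Password);
        }

    One caveat: this covers the connect phase, which is the usual source of the 30-second hang when a host is unreachable; a slave that accepts connections but lags on replication would need a health check outside the request path (e.g. something that rewrites which slave PHP points at).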

    Read the article

  • Transferring HashMap between client and server using Sockets (JAVA)

    - by sar
    I am working on a Java project in which there are multiple terminals. These terminals act as both clients and servers. For example, if there are 3 terminals A, B and C, then at any given point in time one of them, say A, will be a client making a broadcast request. The other two terminals, B and C, will reply. I am using sockets to make them communicate. Once the reply is received from all the other terminals, A checks the pool of channels to see if any one of the channels is free. It takes up a free channel and marks its availability false. The channel pool is implemented using a HashMap:

        HashMap channelpool = new HashMap();
        // contents: {1=true, 2=true, 3=false, 4=true, 5=true,
        //            6=true, 7=true, 8=true, 9=true, 10=true}

    So initially all the channels are true and any terminal can take any channel. But once a channel is taken, it is set to false for the period of use and then reset to true. Now this HashMap has to be shared among the distributed terminals and kept up to date, and I cannot use a shared resource among the terminals to store it. Can someone tell me an easy way to transfer the HashMap between the terminals? I would appreciate it if someone could point me to a website that discusses this.
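
    Since HashMap implements java.io.Serializable, it can be written straight onto a socket's object streams - no hand-rolled wire format needed. A minimal sketch of one transfer (host, port and the surrounding accept loop are placeholders for your own terminal code):

        import java.io.IOException;
        import java.io.ObjectInputStream;
        import java.io.ObjectOutputStream;
        import java.net.ServerSocket;
        import java.net.Socket;
        import java.util.HashMap;

        public class ChannelPoolTransfer {
            // Sender side: push the current pool to another terminal.
            static void send(HashMap<Integer, Boolean> pool, String host, int port) throws IOException {
                Socket s = new Socket(host, port);
                ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream());
                out.writeObject(pool);      // serializes the whole map in one call
                out.flush();
                s.close();
            }

            // Receiver side: accept one connection and read the map back.
            @SuppressWarnings("unchecked")
            static HashMap<Integer, Boolean> receive(int port) throws IOException, ClassNotFoundException {
                ServerSocket server = new ServerSocket(port);
                Socket s = server.accept();
                ObjectInputStream in = new ObjectInputStream(s.getInputStream());
                HashMap<Integer, Boolean> pool = (HashMap<Integer, Boolean>) in.readObject();
                s.close();
                server.close();
                return pool;
            }
        }

    The harder part of the design is not the transfer but the consistency: if two terminals broadcast at nearly the same time, both may see channel 3 as free. Having whichever terminal currently holds the broadcast role apply all updates before passing the map on is one simple way to serialize the changes.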

    Read the article

  • What is the simplest way to map a folder on the file system to a url in Tomcat?

    - by Simon
    Here's my problem... I have a small prototype app (which happens to be in Grails, hosted on AWS) and I want to add the ability for a user to upload a few (max 10) images. I want to persist these images on disk on the server machine, in a folder location outside my WAR. I realise that there is probably a super-scalable solution involving more web servers and optimised static asset serving, but for the approximately 100 users I am likely to get, it's really not worth the effort and cost.

    So, what is the simplest way I can have a virtual folder in my URL space map to a physical folder on disk? I want...

        http://myapp.com/static

    to map to a folder which I can configure, e.g. /var/www/static, so I can then have in my code:

        <img src="/static/user1/picture.jpg"/>

    I don't particularly mind whether the resulting physical folders are directly browsable. Security will eventually be an issue, but it isn't at the start. So, what are my options? I have looked at virtual hosts on the Apache site, but it feels more complicated than I need, and I don't want to use the Grails static rendering plugins.
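
    Since Grails runs inside Tomcat here, one low-ceremony option is a Tomcat context file: Tomcat will mount any directory as a webapp at the path named by the file. A sketch, using the paths from the question and assuming a stock standalone Tomcat layout:

        <!-- $CATALINA_HOME/conf/Catalina/localhost/static.xml -->
        <!-- The filename "static" becomes the context path /static -->
        <Context docBase="/var/www/static" />

    After a restart, http://myapp.com/static/user1/picture.jpg serves /var/www/static/user1/picture.jpg via Tomcat's default servlet, and the upload code just writes files into that directory, outside the WAR.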

    Read the article

  • Update Web Reference in Visual Studio

    - by NeilD
    Hi, I have inherited a web site project that makes use of a number of WCF web services hosted on a BizTalk server. We have two environments that I need to deploy this project to, with different URLs for the different BizTalk servers: in the staging environment I need to point the services at xx.xx.xx.101, and in the live environment at xx.xx.xx.102, or whatever.

    Currently we've got all of the URLs stored in keys in the web.config file so that we can change them dynamically. Unfortunately this isn't working! If I change a URL in the web.config to something other than what the project was compiled with, I get an error when calling the service:

        Server did not recognize the value of HTTP Header SOAPAction: xx.xx.xx.101\ServiceName\MethodName

    I'm told that the only way they've known to deploy this is to update the web.config URLs, change all of the web references in Visual Studio to match, click "Update Web Reference" for each reference, and then compile. It's driving me mad! I've written a pre-build NAnt script to go through and replace all instances of the URL found anywhere in the project directory, and even that isn't making any difference. There must be something else being pulled down from the service when I click "Update Web Reference", but I'm new to working with web services, so I'm not sure what. Does anyone have any ideas? Is there a way to do this programmatically? Thanks.

    Read the article

  • JSR-299 (CDI) configuration at runtime

    - by nsn
    I need to configure different @Alternatives, @Decorators and @Interceptors for different runtime environments (think testing, staging and production servers). Right now I use Maven to create three wars, and the only difference between those wars is in the beans.xml files. Is there a better way to do this?

    I do have @Alternative @Stereotypes for the different environments, but even then I need to alter beans.xml, and they don't work for @Decorators (or do they?). Is it somehow possible to instruct CDI to ignore the values in beans.xml and use a custom configuration source? Because then I could, for example, read a system property or other environment variable. The application exclusively runs in containers that use Weld, so a Weld-specific solution would be OK.

    I already tried to google this but can't seem to find good search terms, and I asked on the Weld users forums, but to no avail. Someone over there suggested writing my own custom extension, but I can't find any API to actually change the container configuration at runtime. I think it would be possible to have some sort of @ApplicationScoped configuration bean and inject it into all @Decorators, which could then decide themselves whether they should be active or not; and then, to configure @Alternatives, write @Produces methods for every interface with multiple implementations and inject the config bean there too. But this seems like a lot of unnecessary work to essentially duplicate functionality already present in CDI?
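
    For what the last paragraph describes, the @Produces half could look like the sketch below - an environment switch read from a system property, with the producer choosing the implementation. All type names are illustrative; marking the concrete classes @Alternative (without enabling them in beans.xml) keeps them out of normal resolution, so the producer is the only source of Mailer:

        import javax.enterprise.context.ApplicationScoped;
        import javax.enterprise.inject.Alternative;
        import javax.enterprise.inject.Produces;

        interface Mailer {
            void send(String message);
        }

        @Alternative   // disabled as a bean; only created by the producer below
        class SmtpMailer implements Mailer {
            public void send(String message) { /* real SMTP delivery */ }
        }

        @Alternative
        class LoggingMailer implements Mailer {
            public void send(String message) { System.out.println("MAIL (not sent): " + message); }
        }

        class MailerProducer {
            @Produces
            @ApplicationScoped
            Mailer mailer() {
                // -Dapp.env=staging on the JVM command line flips the implementation.
                String env = System.getProperty("app.env", "production");
                return "production".equals(env) ? new SmtpMailer() : new LoggingMailer();
            }
        }

    It does duplicate what @Alternative already expresses, as you say - but it trades three wars for one war plus a JVM property, which may be the lesser evil until the container exposes a proper configuration SPI.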

    Read the article

  • Sql Server Replication: Snapshot vs Merge

    - by Zyphrax
    Background information: let's say I have two database servers, both SQL Server 2008. One is on my LAN (ServerLocal), the other is in a remote hosting environment (ServerRemote). I have created a database on ServerLocal and have an exact copy of that database on ServerRemote. The database on ServerRemote is part of a web application, and I would like to keep its data up to date with the data in the database on ServerLocal. ServerLocal is able to communicate with ServerRemote; this is one-way traffic, and communication from ServerRemote to ServerLocal isn't available.

    Current solution: I thought replication would be a nice fit. So I've made ServerLocal a publisher, and subscriptions are pushed to ServerRemote. This works fine: when a snapshot is transferred to ServerRemote, the existing data is purged and the ServerRemote database is once again an exact replica of the database on ServerLocal.

    The problem: records that exist on ServerRemote but not on ServerLocal are removed. This doesn't matter for most of my tables, but in some of them (aspnet_users, for instance) I'd like to keep the existing data and update the records only where necessary. What kind of replication fits my problem?

    Read the article

  • Approach for parsing file and creating dynamic data structure for use by another program

    - by user275633
    All, background: I have a customer with some Python-based build scripts for their datacenter that I've inherited. I did not work on the original design, so I'm somewhat limited in what I can and can't change. My customer has a properties file that they use in their datacenter. Some of the values are used to build their servers, and unfortunately other applications also use these values, so I cannot change the format to make it easier for me. What I want to do is make the scripts more dynamic, so that I don't have to keep updating them in the future and can just add more hosts to the property file. The property file looks something like this:

        projectName.ClusterNameServer1.sslport=443
        projectName.ClusterNameServer1.port=80
        projectName.ClusterNameServer1.host=myHostA
        projectName.ClusterNameServer2.sslport=443
        projectName.ClusterNameServer2.port=80
        projectName.ClusterNameServer2.host=myHostB

    In their deployment scripts they basically have a lot of "if projectName.ClusterNameServerX" checks, where X is some number of defined entries, e.g.:

        if projectName.ClusterNameServer1.host != "" do X
        if projectName.ClusterNameServer2.host != "" do X
        if projectName.ClusterNameServer3.host != "" do X

    Then, when they add another host (say Server4), they add another if statement.

    Question: What I would like to do is parse the properties file into some data structure, pass it to the deployment scripts, and then just iterate over the structure and do my deployment that way, so I don't have to keep adding "if some host, do something" blocks. I'm curious to get suggestions as to how others would parse the file, what sort of data structure they would use, and how they would group things together - by ClusterNameServer# or something else. Thanks
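
    Since the key format is fixed at project.server.property, one natural structure is a dict of dicts keyed by the middle segment - each server then carries its own property map, and a loop over the dict replaces the if-chain. A sketch in Python (the file name and the deploy step are placeholders):

        import collections

        def load_servers(path):
            """Parse project.server.prop=value lines into {server: {prop: value}}."""
            servers = collections.defaultdict(dict)
            with open(path) as fh:
                for line in fh:
                    line = line.strip()
                    if not line or line.startswith('#'):
                        continue                      # skip blanks and comments
                    key, sep, value = line.partition('=')
                    if not sep:
                        continue                      # not a property line
                    parts = key.split('.')
                    if len(parts) == 3:               # projectName.ClusterNameServer1.host
                        servers[parts[1]][parts[2]] = value
            return servers

        # Adding a ClusterNameServer4 block to the file needs no code change:
        for name, props in sorted(load_servers('datacenter.properties').items()):
            if props.get('host'):                     # same guard as the old if-chain
                print('would deploy to %s (port %s, sslport %s)'
                      % (props['host'], props.get('port'), props.get('sslport')))

    Swapping the print for the real deployment call gives the scripts the "just add hosts to the file" behavior you're after.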

    Read the article

  • Cannot create a new VS data connection in Server Explorer

    - by Seventh Element
    I have a local instance of SQL Server 2008 Express Edition running on my development PC, and I'm trying to create a new data connection through the Visual Studio Server Explorer. The steps are the following:

    1. Right-click the "Data Connections" node and choose a data source.
    2. Select "Microsoft SQL Server" as the data source; the "Add Connection" dialog window appears.
    3. Select my local server instance - "Test Connection" works fine.
    4. Select "AdventureWorks" as the database name - "Test Connection" works fine.
    5. Hit the "OK" button - error message: "This server version is not supported. Only servers up to MS SQL Server 2005 are supported."

    I'm using Visual Studio 2008 Professional Edition, and the target framework of the application is .NET Framework 3.5. I have a reference to System.Data (framework v2.0) and cannot find another version of the assembly on my system. Am I referencing the wrong assembly? How can I fix this problem?

    Read the article

  • Help me plan larger Qt project

    - by Pirate for Profit
    I'm trying to create an automated task management system for our company, because they pay me to waste my time. New users will create a "profile", which will store all the working data (I guess serialize everything into XML, right?). A profile will contain many different tasks. Tasks are basically standard computer-janitor chores such as moving files around, reading/writing to databases, pinging servers, etc. So as you can see, a task is made up of many different jobs, and tasks should run indefinitely as long as the user somehow generates jobs for them. There should also be a way to enable/disable (start/pause) tasks.

    They say create the UI first, so... I figure the best way to represent this is with a list-view widget that lists all the tasks in the profile, with enabled tasks shown in bold. When you double-click a task, a tab opens in the main view with all its settings, output, errors, etc. You can right-click a task in the list-view to enable/disable it, and so on. Each task will be a closable tab, but closing just hides it.

    My question is: should I extend QAction and QTabWidget so I can easily drop tasks in and out of my list-view and tab bar? I'm thinking of some way to make this plugin-based, since a lot of the plugins may share similar settings (some of the same options, but with different information filled in). Also, what's the best way to set up threading for this application?

    Read the article
