Search Results

Search found 19928 results on 798 pages for 'multiple constructors'.


  • How to share code as open source?

    - by Ethel Evans
    I have a little program that I wrote for a local group to handle a somewhat complicated issue: scheduling multiple meetings in multiple locations that change weekly according to certain criteria. It's a niche need, but I wouldn't be surprised if there are other groups that could use software like this. In fact, we've had requests from others for directions on starting a group like this, and if their groups get as big as ours, they might also want special software to help with scheduling. I plan to continue developing the program and eventually make it an online web app, but a very simple alpha version is completed as a console app. I'd like to make it available as open source, but I have no idea what kind of process I should go through first. Right now, all I have is Java code, not even unit-tested thoroughly. I haven't shown the code to anyone else. There is no documentation. I don't know where I would put the code so others could access it. I don't know anything about licensing it. I don't know what kind of support people will expect from me if I release it as open source. I have no idea what else I should worry about. Can someone outline for me (or post an article that outlines) the process of taking open source software from "coded" to "completed / available"? I really don't want to embarrass myself by doing things weirdly.

    Read the article

  • 32bit Application Memory Usage on 64bit Windows 7

    - by Brian
    I have an early 2012 MacBook Pro with an Intel i7 processor and 16 GB of RAM, running Windows 7 Professional 64-bit via Boot Camp. I work in Geographical Information Systems as a programmer, so most of the applications I am running are 32-bit applications, but they tend to use a lot of resources (i.e. ArcGIS, SQL Server Express, Visual Studio, etc.). I have been noticing that when I have multiple instances of either the same 32-bit application or different 32-bit applications, all working on hefty processing tasks, I am still only topping out at about 30% memory use. I understand 32-bit applications are limited to less than 4 GB of RAM, but I assumed that one instance could use its own 4 GB while another instance could use another 4 GB to take full advantage of all the memory I have installed. Can anyone explain how this works and how I can get my applications to take advantage of all my memory via running multiple instances?

    Read the article

  • Can't play stream from TorrentFlux server

    - by thegreyspot
    I am trying to stream a video from my TorrentFlux-b4rt server. I tried multiple media players; none of them work. Only VLC was able to produce an error message: input can't be opened: VLC is unable to open the MRL 'mms://..*.*:8080/'. Check the log for details. I have tried multiple computers on different networks and all have the same issue. I am using Windows 7 to play the videos, and the server is TorrentFlux-b4rt 1.0-beta2 running on Ubuntu 9.10.

    Read the article

  • Does RDNS for mail server have to match the mail server hostname exactly?

    - by threecheeseopera
    Typically when setting up a mail server, I create an rDNS record for the mail server IP to match the mail server hostname (ex: mail.example.com). Can I instead set the rDNS ptr to match the parent domain (e.g. example.com), if this server is being used for multiple purposes, and still send mail successfully (i.e. not be classified as spam b/c of mismatched rDNS)? Thanks! EDIT: The article at http://en.wikipedia.org/wiki/Forward_Confirmed_reverse_DNS seems to indicate that it might be more complicated than I had thought. For instance, 1) I did not know that you could have multiple PTR records for a given IP; 2) it appears that as long as each PTR record matches an A record, everything is good (basically nullifying my question). Would you agree?

    Read the article

  • Working with Reporting Services Filters – Part 2: The LIKE Operator

    - by smisner
    In the first post of this series, I introduced the use of filters within the report rather than in the query. I included a list of filter operators, and then focused on the use of the IN operator. As I mentioned in the previous post, the use of some of these operators is not obvious, so I'm going to spend some time explaining them as well as describing ways that you can use report filters in Reporting Services in this series of blog posts. Now let's look at the LIKE operator. If you write T-SQL queries, you've undoubtedly used the LIKE operator to produce a query using the % symbol as a wildcard for multiple characters like this: select * from DimProduct where EnglishProductName like '%Silver%' And you know that you can use the _ symbol as a wildcard for a single character like this: select * from DimProduct where EnglishProductName like '_L Mountain Frame - Black, 4_' So when you encounter the LIKE operator in a Reporting Services filter, you probably expect it to work the same way. But it doesn't. You use the * symbol as a wildcard for multiple characters, as shown here: Expression: [EnglishProductName]; Data Type: Text; Operator: Like; Value: *Silver*. Note that you don't have to include quotes around the string that you use for comparison. Books Online has an example of using the % symbol as a wildcard for a single character, but I have not been able to successfully use this wildcard. If anyone has a working example, I'd love to see it!

    Read the article

  • Can Vagrant point to a directory of Puppet manifests for execution?

    - by SeligkeitIstInGott
    I am using Vagrant to jump start some initial Puppet config and am confused on how to include/run multiple manifests (other than just site.pp) in the puppet execution workflow without making the extra manifests into modules and including them that way. In the puppet manifests directory that I point Vagrant to (see below) I have two manifests that I want executed: site.pp and hierasetup.pp. config.vm.provision "puppet" do |puppet| puppet.manifests_path = "puppet_files/manifests" puppet.module_path = "puppet_files/modules" puppet.manifest_file = "site.pp" puppet.options = "--verbose --debug" end Currently I am having site.pp be the manifest that calls hierasetup.pp. My site.pp looks like this: File { owner => 'root', group => 'root', mode => '0644', } import "hierasetup.pp" include jboss But I get this error about the deprecation of "import": Warning: The use of 'import' is deprecated at /tmp/vagrant-puppet-1/manifests/site.pp:33. See http://links.puppetlabs.com/puppet-import-deprecation (at grammar.ra:610:in `_reduce_190') According to the referenced URL under "Things to try instead" it says "To keep your node definitions in separate files, specify a directory as your main manifest". Further this puppet doc on main manifests says: "Recommended: If you’re using the main manifest heavily instead of relying on an ENC, consider changing the manifest setting to $confdir/manifests. This lets you split up your top-level code into multiple files while avoiding the import keyword. It will also match the behavior of simple environments." It appears that Puppet can reference an entire directory instead of just a specific manifest file, such that I would expect that Vagrant would make a provision for this and allow me to drop the "puppet.manifest_file = "site.pp" line and point to the parent directory instead in which all the *.pp files there will be executed. However removing that line in Vagrant merely generates a complaint about an expected "default.pp" in its stead: puppet provisioner: * The configured Puppet manifest is missing. Please specify a path to an existing manifest: /some/path/puppet_files/manifests/default.pp So: Firstly, do I understand the "new" (non-import) way of calling multiple manifests correctly, in that a directory is to be pointed to in which all the *.pp files inside it will be executed? And secondly, has Vagrant "caught up" with this new change to accommodate the referencing of directories in conjunction with Puppet's deprecation of "import"? Update: Thanks to Shane the issue with #2 (Vagrant's code not being caught up to allow pointing to puppet manifest directories) was reported on Vagrant's GitHub issue tracker site and has since been patched: https://github.com/mitchellh/vagrant/issues/4169

    Read the article

  • dual/multi-boot computers and software licensing

    - by Matt
    Suppose you have a computer with two or more operating systems, and a certain piece of software whose license terms allow it to be installed on one computer, and it does a daily check with a remote server to verify that your serial is only used on the original install computer. You install this software on each of your OSes, but since each is a different OS, the remote server would have to determine that it is not on the same computer, and so would disable your license. So my question: when a license refers to a single computer, does a situation like this usually count as a single computer, or do the multiple OSes sort of make it multiple computers? How do you think a software vendor (specifically thinking of AV companies that do this sort of serial check) would handle this situation?

    Read the article

  • What is the best way to render a 2d game map?

    - by Deukalion
    I know efficiency is key in game programming and I've had some experience with rendering a "map" before, but probably not in the best of ways. For a 2D top-down game (simply render the textures/tiles of the world, nothing else): say you have a map of 1000x1000 (tiles or whatever). If the tile isn't in the view of the camera, it shouldn't be rendered - it's that simple. No need to render a tile that won't be seen. But since you have 1000x1000 objects in your map, or perhaps fewer, you probably don't want to loop through all 1000*1000 tiles just to see whether they're supposed to be rendered or not. Question: What is the best way to implement this efficiently, so that it can quickly determine which tiles are supposed to be rendered? Also, I'm not building my game around tiles rendered with a SpriteBatch, so there are no rectangles; the shapes can be different sizes and have multiple points, say a curved object of 10 points with a texture inside that shape. Question: How do you determine whether this kind of object is "inside" the view of the camera? It's easy with a 48x48 rectangle - just see if its X+Width or Y+Height is in the view of the camera. It's different with multiple points. Simply put, how do I manage the code and the data efficiently so that I don't have to loop through a million objects at the same time?
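    For the uniform-tile case, the usual answer is to invert the problem: compute the range of tile indices the camera rectangle covers and loop over only those, so the cost depends on the view size rather than the map size. A minimal sketch of that idea in Java (the tile size, class and method names are illustrative, not taken from the question):

        public class TileCuller {
            static final int TILE_SIZE = 48;

            // Draw only the tiles overlapped by the camera rectangle.
            static void renderVisibleTiles(int cameraX, int cameraY,
                                           int viewWidth, int viewHeight,
                                           int[][] map) {
                int firstCol = Math.max(0, cameraX / TILE_SIZE);
                int firstRow = Math.max(0, cameraY / TILE_SIZE);
                int lastCol  = Math.min(map[0].length - 1, (cameraX + viewWidth) / TILE_SIZE);
                int lastRow  = Math.min(map.length - 1, (cameraY + viewHeight) / TILE_SIZE);

                for (int row = firstRow; row <= lastRow; row++) {
                    for (int col = firstCol; col <= lastCol; col++) {
                        drawTile(map[row][col], col * TILE_SIZE, row * TILE_SIZE);
                    }
                }
            }

            static void drawTile(int tileId, int x, int y) {
                // placeholder for the engine's actual draw call
            }
        }

    For arbitrary multi-point shapes, the common trick is to precompute an axis-aligned bounding box (the min/max of the shape's points) per object and test that box against the camera rectangle; a spatial structure such as a uniform grid or quadtree keyed on those boxes then avoids visiting every object at all.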

    Read the article

  • Standard ratio of cookies to "visitors"?

    - by Jeff Atwood
    As noted in a recent blog post, We see a large discrepancy between Google Analytics "visitors" and Quantcast "visitors". Also, for reasons we have never figured out, Google Analytics just gets larger numbers than Quantcast. Right now GA is showing more visitors (15 million) on stackoverflow.com alone than Quantcast sees on the whole network (14 million): Why? I don’t know. Either Google Analytics loses cookies sometimes, or Quantcast misses visitors. Counting is an inexact science. We think this is because Quantcast uses a more conservative ratio of cookies-to-visitors. Whereas Google Analytics might consider every cookie a "visitor", Quantcast will only consider every 1.24 cookies a "visitor". This makes sense to me, as people may access our sites from multiple computers, multiple browsers, etcetera. I have two closely related questions: Is there an accepted standard ratio of cookies to visitors? This is obviously an inexact science, but is there any emerging rule of thumb? Is there any more accurate way to count "visitors" to a website other than relying on browser cookies? Or is this just always going to be kind of a best-effort estimation crapshoot no matter how you measure it?

    Read the article

  • RESTFul: state changing actions

    - by Miro Svrtan
    I'm planning to build a RESTful API, but there are some architectural questions that are creating some problems in my head. Adding backend business logic to clients is an option that I would like to avoid, since updating multiple client platforms is hard to maintain in real time when business logic can change rapidly. Let's say we have an article as a resource (api/article); how should we implement actions like publish, unpublish, activate or deactivate and so on, while trying to keep it as simple as possible? 1) Should we use api/article/{id}/{action}, since a lot of backend logic can happen there, like pushing to remote locations or changing multiple properties? Probably the hardest thing here is that we would need to send all the article data back to the API for updating, and multi-user work could not be implemented. For instance, an editor could send data that is 5 seconds old and overwrite a fix that some other journalist made just 2 seconds ago, and there is no way I could explain this to clients, since publishing an article is really not in any way connected to updating the content. 2) Creating a new resource can also be an option, api/article-{action}/id, but then the returned resource would not be article-{action} but article, and I'm not sure if that is proper. Also, in the server-side code the article class handles the actual work for both resources, and I'm not sure if this goes against RESTful thinking. Any suggestions are welcome.
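    For what it's worth, option 1 corresponds to the common "action as a sub-resource" style. A minimal sketch in Java using JAX-RS - the question names no particular stack, so the class, paths and in-memory state here are purely illustrative, not an implementation of the asker's API:

        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;
        import javax.ws.rs.POST;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.core.Response;

        @Path("/articles")
        public class ArticleResource {

            // Stand-in for real persistence: just remember which ids are published.
            private static final Set<Long> published = ConcurrentHashMap.newKeySet();

            // POST /articles/{id}/publish changes state without the client
            // re-sending the whole article, so a stale copy of the content
            // can never overwrite another editor's recent fix.
            @POST
            @Path("{id}/publish")
            public Response publish(@PathParam("id") long id) {
                published.add(id);
                return Response.noContent().build();
            }

            @POST
            @Path("{id}/unpublish")
            public Response unpublish(@PathParam("id") long id) {
                published.remove(id);
                return Response.noContent().build();
            }
        }

    Because the action endpoint carries no article body, publishing and editing never race on the content, which is exactly the multi-user concern described above.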

    Read the article

  • Switch or a Dictionary when assigning to new object

    - by KChaloux
    Recently, I've come to prefer mapping 1-1 relationships using Dictionaries instead of Switch statements. I find it to be a little faster to write and easier to mentally process. Unfortunately, when mapping to a new instance of an object, I don't want to define it like this: var fooDict = new Dictionary<int, IBigObject>() { { 0, new Foo() }, // Creates an instance of Foo { 1, new Bar() }, // Creates an instance of Bar { 2, new Baz() } // Creates an instance of Baz } var quux = fooDict[0]; // quux references Foo Given that construct, I've wasted CPU cycles and memory creating 3 objects, doing whatever their constructors might contain, and only ended up using one of them. I also believe that mapping other objects to fooDict[0] in this case will cause them to reference the same thing, rather than creating a new instance of Foo as intended. A solution would be to use a lambda instead: var fooDict = new Dictionary<int, Func<IBigObject>>() { { 0, () => new Foo() }, // Returns a new instance of Foo when invoked { 1, () => new Bar() }, // Ditto Bar { 2, () => new Baz() } // Ditto Baz } var quux = fooDict[0](); // equivalent to saying 'var quux = new Foo();' Is this getting to a point where it's too confusing? It's easy to miss that () on the end. Or is mapping to a function/expression a fairly common practice? The alternative would be to use a switch: IBigObject quux; switch(someInt) { case 0: quux = new Foo(); break; case 1: quux = new Bar(); break; case 2: quux = new Baz(); break; } Which invocation is more acceptable? Dictionary: faster lookups and fewer keywords (case and break). Switch: more commonly found in code, and doesn't require the use of a Func<> object for indirection.
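    The same lazy-factory idea carries over to other languages. As a rough Java rendering of the question's C# approach (the interface and class names below are illustrative), java.util.function.Supplier plays the role of Func<T>, and nothing is constructed until the entry is actually looked up:

        import java.util.Map;
        import java.util.function.Supplier;

        public class FactoryLookup {
            interface BigObject {}
            static class Foo implements BigObject {}
            static class Bar implements BigObject {}
            static class Baz implements BigObject {}

            // Each key maps to a factory, so no instance exists until requested.
            private static final Map<Integer, Supplier<BigObject>> FACTORIES = Map.of(
                    0, Foo::new,
                    1, Bar::new,
                    2, Baz::new);

            public static void main(String[] args) {
                BigObject quux = FACTORIES.get(0).get(); // builds a new Foo only now
                System.out.println(quux.getClass().getSimpleName());
            }
        }

    The trade-off is the same as in the C# version: every lookup yields a fresh instance and no constructor runs eagerly, at the cost of the easy-to-miss extra call.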

    Read the article

  • Pagination for product listing, what to use? "canonical" or "rel-prev-next" or do nothing?

    - by Jayapal Chandran
    I want to make sure my product listing is 10 products per page which are not in a series (link). They have explained how to use canonical or rel prev for pagination when a long page has been divided into multiple pages and the multiple pages become a series, whereas my condition is not that. They are unique listings which are not related to each other... All the listing links lead to a product profile page. So let's say my site is all about cars and I have a Used Audi page with 1000 Audis for sale. There are 10 used Audi cars on each page, so there are 100 pages in the series. If I start to utilise rel="prev" and rel="next", should I set page 2 onwards as index,follow or noindex,follow? The content on page 2 all the way to 100 only changes ever so slightly, as different cars will be for sale on different pages, but from a "Panda" point of view the pages are incredibly similar, as they'd hold the same meta data as page 1 in the series along with duplicate reviews & news etc. I want page 1 in the series as the main page for Google to send users to, and I don't see the point in Google indexing pages 2 to 100. What's everyone's view on this? Lastly, with the rel="canonical" tag, should pages 2 to 100 all point back to page 1 in the series or to the individual page itself? E.G: /used-audi/page-3/.

    Read the article

  • Limitations of the SharePoint join using CAML

    - by ybbest
    Limitation One: In SharePoint 2010, you can join the primary list to a foreign list and include more than one field from the foreign list. However, the limitation is that the included fields from the foreign list have to be one of the following types: Calculated (treated as plain text), ContentTypeId, Counter, Currency, DateTime, Guid, Integer, Note (one-line only), Number, or Text. The above limitation also explains why you cannot include some types of fields from the remote list when creating a lookup. Limitation Two: When using a CAML query to join SharePoint lists, there can be joins to multiple lists, multiple joins to the same list, and chains of joins. However, the limitations are that only inner and left outer joins are permitted, and the field in the primary list must be a Lookup type field that looks up to the field in the foreign list. Limitation Three: The support for writing the JOIN query in CAML is very limited. I have to hand-code the CAML query to join the lists, which is not fun at all. Some blog posts mention using LINQ to SharePoint and getting the CAML code from there, but I never got it to work. You can check this blog post for that. Let me know if it works for you. References: http://msdn.microsoft.com/en-us/library/ee535502.aspx http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spquery.joins.aspx

    Read the article

  • IE does not send NTLM domain

    - by Buddy Casino
    Hi! I have a problem with NTLM single-sign-on with IE8. We've got multiple domain controllers and users from multiple domains that we try to authenticate to a web application via NTLMv1 passthru. Somehow IE fails to send the user's domain in the NTLM Type 1 message. This has the effect that the webapp can not match users properly to their domain controllers, resulting in failed logon attempts, because a user from domain X tries to authenticate to domain controller Y. This problem does not occur with Firefox, as it always sends the correct domain header. So: how do I get IE to send the domain in the NTLM header? Grateful for any help, Michael

    Read the article

  • CodePlex Daily Summary for Saturday, May 17, 2014

    CodePlex Daily Summary for Saturday, May 17, 2014Popular ReleasesSEToolbox: SEToolbox 01.030.008 Release 1: Fixed cube editor failing to apply color to cubes. Added to cube editor, replace cube dialog, and Build Percent dialog. Corrected for hidden asteroid ore, allowing rare ore to show when importing an asteroid, or converting a 3d model to an asteroid (still appears to be limitations on rare ore in small asteroids). Allowed ore selection to Asteroid file import. (Can copy/import and convert existing asteroid to another ore). Added progress bars to common long running operations. Fixed ...Better Robocopy GUI: Command Line GUI for Robocopy: Better Robocopy GUI had become the primary plugin in Command Line GUI built on .NET 4CTI Text Encryption: CTI Text Encryption 5.3: Change Log: - Remove read only behavior of text area of encrypted text. - Minor UI functionality update.Mini SQL Query: Mini SQL Query (1.0.71.456): Minor fixes and template corrections.QuickMon: Version 3.10: Adding the ability to see 'history' of Collector states (including details of errors or warnings at that time). The history size is configurable (default is switched off) and the Windows Service completely ignores keeping history (no UI or user to access it anyway). The Collector stats window now displays this history plus multiple collector stats windows can be opened at the same time. Additionally fixed a bug in the event log collector that reported an 'Error' state when an 'out of bounds' ...TFS Planning and Disaster Recovery Avoidance Guide: v1.4.BETA - TFS, DR and Azure IaaS Planning Guides: Welcome to the TFS Planning and DR Avoidance Guidance What is new? A new crisper, more compact style, which is easier to consume on multiple devices without sacrificing any content. Also included are the new TFS on Azure IaaS guide and supplementary guides. Note Capacity planning workbook and posters are included in the Everything Zip package. Quality-Bar Detail Documentation has been reviewed by Visual Studio ALM Rangers Documentation has been through an independent technical review ...WinAudit: WinAudit Freeware v3.0: WinAudit.exe v3.0 MD5: 88750CCF49FF7418199B2645755830FA Known Issues: 1. Report creation can be very slow when right-to-left (Hebrew) characters are present. 2. Emsisoft Anti-Malware may stop and/or quarantine WinAudit. This happens when WinAudit attempts to obtain a list if running programmes. You will need to set an exception rule in Emsisoft to allow WinAudit to run.Aspose for Sitefinity: Sitefinity Export to Microsoft Word and PDF: Aspose Sitefinity Content Export Add-on allow users to export online content into Microsoft Word or Adobe Acrobat PDF document using Aspose.Words. This Add-on makes it very simple and easy to have an offline copy of your favorite online content for editing, sharing and printing etc. in popular Microsoft Word Doc/Docx or PDF format. It adds simple Export to Word and Export to Pdf buttons at any desired location on the page and clicking the button dynamically exports the content of the page int...MVCwCMS - ASP.NET MVC CMS: MVCwCMS 2.2.2: Updated CKFinder config. For the installation instructions visit the documentation page: https://mvcwcms.codeplex.com/documentationTerraMap (Terraria World Map Viewer): TerraMap 1.0.4: Added support for the new Terraria v1.2.4 update. 
New items, walls, and tiles Fixed Issue 35206: Hightlight/Find doesn't work for Demon Altars Fixed finding Demon Hearts/Shadow Orbs Added ability to find Enchanted Swords (in the stone) and Water Bolt books Fixed installer not uninstalling older versions The setup file will make sure .NET 4 is installed, install TerraMap, create desktop and start menu shortcuts, add a .wld file association, and launch TerraMap. If you prefer the zip ...WPF Localization Extension: v2.2.1: Issue #9277 Issue #9292 Issue #9311 Issue #9312 Issue #9313 Issue #9314CtrlAltStudio Viewer: CtrlAltStudio Viewer 1.2.1.41167 Release: This release of the CtrlAltStudio Viewer includes the following significant features: Oculus Rift support. Stereoscopic 3D display support. Variable walking / flying speed. Xbox 360 Controller support. Kinect for Windows support. Based on Firestorm viewer 4.6.5 codebase. For more details, see the release notes linked to below. Release notes: http://ctrlaltstudio.com/viewer/release-notes/1-2-1-41167-release Support info: http://ctrlaltstudio.com/viewer/support Privacy policy: http:/...ParserIO: ParserIO v1.0.0.3: Fixed some bug about AIM Symbology IDSSIS SFTP Task Control Flow Component: SSIS SFTP v2 for SQL Server 2012: Please report you to the Documentation page for installation instructions.Grade Calculator For BTEC First Diploma in IT level 2: Grade Calculator For BTEC First Diploma in IT: Grade Calculator For BTEC First Diploma in IT level 2Spaghetti CMS: Version 1.50: New: Backend with new design, bootstrap integration New: Patch function for assigning themes, master page files to pages New: Wysiwyg editor (possible to link css file against editor) and filemanager New: Support time triggered content New: Edit css, js, skin, master file within the CMS New: Dynamically add controls to the CMS New: Change password for an user (usermanagement) New: Localize CMS In the near future we will open our new website for documentation and demo. http://...ExtJS based ASP.NET Controls: FineUI v4.0.6: FineUI(???) ?? ExtJS ??? ASP.NET ??? FineUI??? ?? No JavaScript,No CSS,No UpdatePanel,No ViewState,No WebServices ??????? ?????? IE 8.0+、Chrome、Firefox、Opera、Safari ???? Apache License v2.0 ?:ExtJS ?? GPL v3 ?????(http://www.sencha.com/license) ???? ??:http://fineui.com/ ??:http://fineui.com/bbs/ ??:http://fineui.com/demo/ ??:http://fineui.com/doc/ ??:http://fineui.codeplex.com/ FineUI ???? ExtJS ????????,???? ExtJS ?,?????: 1. ????? FineUI ? ExtJS ? http://fineui.com/bbs/forum.ph...Office App Model Samples: Office App Model Samples v2.0: Office App Model Samples v2.0Readable Passphrase Generator: KeePass Plugin 0.13.0: Version 0.13.0 Added "mutators" which add uppercase and numbers to passphrases (to help complying with upper, lower, number complexity rules). Additional API methods which help consuming the generator from 3rd party c# projects. 
    13,160 words in the default dictionary (~600 more than previous release). CS-Script for Notepad++ (C# intellisense and code execution): Release v1.0.25.0: MemberInfo/MethodInfo popup is now positioned properly to fit the screen; in MethodInfo popup method signatures are word-wrapped; implemented Debug text value visualizer; pinning sub-values from Watch Panel. New Projects: Allowing Multiple Attachments for SharePoint 2013 Custom Lists: This code supports basic upload of multiple attachments to a Custom List item and should be able to attach the existing file name override. Azure File Depot: The Azure File Depot is an effort to provide sample implementations of various tasks related to using blob storage to move files around. CRM Customization Comparer - 2011 and 2013: Compare your Dynamics CRM Solution files via XML Diff. Graphical interface to quickly assess changes. Features: - Graphical UI - Supports CRM 2011 and 2013. ElectrosLtd DSA Assignment: This program was created in order to make use of models, views and controllers. GPS Tracking: A GPS Tracker App. OOP-2112110195: subject: OOP; name: Mai Duy Tan. PowerShell Task Control Flow Component for SSIS: Launch PowerShell Script/Commands from SSIS - another Custom Control Flow Component with the new "PowerShell Task" (currently targets PowerShell 3.0). Quan ly giai bong da: Manage the English Premier League football competition. SharePoint PDF OpenDocument: SharePoint PDF Content type (Sandbox solution). One OpenDocuments COM+ server for PDF content. Adobe software not included but required. Useful Classes: Useful Classes is a DLL that I have written to include a ton of features from one file within your project.

    Read the article

  • Building a Redundant / Distributed Application

    - by MattW
    This is more of a "point me in the right direction" question. My team of three and I have built a hosted web app that queues and routes customer chat requests to available customer service agents (it does other things as well, but this is enough background to illustrate the issue). The basic dev architecture today is: a single-page AJAX web UI (ASP.NET MVC) with floating chat windows (think Gmail); a backend Windows service to queue and route the chat requests (this service also logs the chats, calculates service levels, etc.); and a Comet server product that routes data between the web frontend and the backend Windows service (this also helps us detect which agents are still connected/online). And our hardware architecture today is: 2 servers to host the web UI portion of the application; a load balancer to route requests to the 2 different web app servers; and a third server to host the SQL Server DB and the backend Windows service responsible for queuing / delivering chats. So as it stands today, one of the web app servers could go down and we would be ok. However, if something were to happen to the SQL Server / Windows service server, we would be boned. My question: how can I make this backend Windows service logic able to be spread across multiple machines (distributed)? The Windows service is written to accept requests from the Comet server, check for available agents, and route the chat to those agents. How can I make this more distributed? How can I make it so that the work of the backend Windows service can be spread across multiple machines for redundancy and uptime purposes? Will I need to re-write it with distributed computing in mind? I should also note that I am hosting all of this on Rackspace Cloud instances - so maybe it is something I should be less concerned about? Thanks in advance for any help!

    Read the article

  • Multi-Resolution Mobile Development

    - by user2186302
    I'm about to start development on my first game for mobile phones (I already have a Flash prototype completed, so it's just a matter of "porting" it to mobile and fixing up the code) and plan on hopefully being able to get the game working on iPhones and most Android devices. I am using Haxe along with OpenFL and HaxeFlixel for development. My question is: What resolution should I design the game in initially, and/or what is the best way to develop a game for multiple resolutions? I have found multiple different methods, the best, in my opinion, being strategy 3 on this page: http://wiki.starling-framework.org/manual/multi-resolution_development. However, I have some questions about this. First, what would the best base resolution to use be? The guide suggests 240*320, which seems alright to me, although if I choose to use pixel graphics, as I most probably will given I'm using HaxeFlixel, I'm not sure if they'll look too blocky on larger screens - which I'm not even sure is a problem, as it might still look alright. (Honestly, I'm not sure about that, and I'd like to see examples of games that use this method and look nice.) Finally, please feel free to share whatever methods you use and think are best. For example, HaxeFlixel has a scaling feature that scales the game to fit the exact screen size, but I'm afraid that would lead to blurry and improperly scaled graphics since it would scale by non-integers. But I'm not sure how noticeable a problem that may or may not be, although from experience I'm pretty sure it won't look nice, and currently I don't think I'm going to go for this option. So, I would really appreciate any help on this subject. Thank you in advance.
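    One common refinement of that strategy is to scale the base resolution only by whole numbers, so pixel art stays crisp, and letterbox whatever screen space is left over. A minimal sketch of the arithmetic, shown in Java purely for illustration (the question's project is in Haxe, and the 240x320 base comes from the linked guide):

        public class ScaleCalculator {
            static final int BASE_WIDTH = 240;
            static final int BASE_HEIGHT = 320;

            // Largest integer scale of the base resolution that still fits the screen.
            static int integerScale(int screenWidth, int screenHeight) {
                int scaleX = screenWidth / BASE_WIDTH;   // integer division
                int scaleY = screenHeight / BASE_HEIGHT;
                return Math.max(1, Math.min(scaleX, scaleY));
            }

            public static void main(String[] args) {
                // A 1080x1920 phone gets scale 4: render at 960x1280 and letterbox the rest.
                System.out.println(integerScale(1080, 1920));
            }
        }

    Fractional scales (what a fit-to-screen option typically produces) are what cause the blurry, unevenly sized pixels the question worries about, so keeping the factor an integer and accepting small borders is the usual compromise for pixel-art games.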

    Read the article

  • Best PerfCounters for monitoring system health of IIS, WCF, WWF and .Net for a Workflow based soluti

    - by Gineer
    We have a solution built in .NET that will be installed into a client environment. The solution will span multiple servers and run on multiple tiers. The client makes use of MOM (Microsoft Operations Manager) to monitor the system. What are the best counters to use for monitoring the overall health of the system? Are there any built-in counters that we could add into a MOM Pack (as an Alert) to test a given scenario? Any thoughts or suggestions would be much appreciated. Thanks

    Read the article

  • Is there ever a reason to do all an object's work in a constructor?

    - by Kane
    Let me preface this by saying this is not my code nor my coworkers' code. Years ago when our company was smaller, we had some projects we needed done that we did not have the capacity for, so they were outsourced. Now, I have nothing against outsourcing or contractors in general, but the codebase they produced is a mass of WTFs. That being said, it does (mostly) work, so I suppose it's in the top 10% of outsourced projects I've seen. As our company has grown, we've tried to take more of our development in house. This particular project landed in my lap so I've been going over it, cleaning it up, adding tests, etc. There's one pattern I see repeated a lot and it seems so mindblowingly awful that I wondered if maybe there is a reason and I just don't see it. The pattern is an object with no public methods or members, just a public constructor that does all the work of the object. For example, (the code is in Java, if that matters, but I hope this to be a more general question): public class Foo { private int bar; private String baz; public Foo(File f) { execute(f); } private void execute(File f) { // FTP the file to some hardcoded location, // or parse the file and commit to the database, or whatever } } If you're wondering, this type of code is often called in the following manner: for(File f : someListOfFiles) { new Foo(f); } Now, I was taught long ago that instantiating objects in a loop is generally a bad idea, and that constructors should do a minimum of work. Looking at this code it looks like it would be better to drop the constructor and make execute a public static method. I did ask the contractor why it was done this way, and the response I got was "We can change it if you want". Which was not really helpful. Anyway, is there ever a reason to do something like this, in any programming language, or is this just another submission to the Daily WTF?
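    As a concrete illustration of the refactoring the question suggests (a sketch only, not the contractor's actual code - the class and method names are made up), the work moves out of the constructor into an ordinary static method, so the loop no longer creates a throwaway object per file:

        import java.io.File;
        import java.util.List;

        public class FileProcessor {

            // The work formerly buried in Foo's constructor becomes an explicit,
            // testable operation with no object state left behind.
            public static void process(File f) {
                // FTP the file to some location, or parse it and commit to the
                // database, or whatever the original constructor did.
            }

            public static void main(String[] args) {
                List<File> someListOfFiles = List.of(new File("a.txt"), new File("b.txt"));
                for (File f : someListOfFiles) {
                    process(f); // no 'new' needed just to trigger side effects
                }
            }
        }

    If the operation genuinely needs configuration or collaborators, an alternative is a single reusable object constructed once outside the loop, with process(File) as an instance method.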

    Read the article

  • Generalist Languages: Dying or Alive and Well?

    - by dsimcha
    Around here, it seems like there's somewhat of a consensus that generalist programming languages (that try to be good at everything, support multiple paradigms, support both very high- and very low-level programming), etc. are a bad idea, and that it's better to pick the right tool for the job and use lots of different languages. I see three major areas where this is flawed: Interfacing multiple languages is always at least a source of friction and is sometimes practically impossible. How severe a problem this is depends on how fine-grained the interfacing is. Near the boundary between the two languages, though, you're basically limited to the intersection of their features, and you have to care about things like binary interfaces that you usually wouldn't. Passing complex data structures (i.e. not just primitives and arrays of primitives) between languages is almost always a hassle. Furthermore, shifting between different syntaxes, different conventions, etc. can be confusing and annoying, though this is a fairly minor complaint. Requirements are never set in stone. I hate picking a language thinking it's the right tool for the job, then realizing that, when some new requirement surfaces, it's actually a terrible choice for that requirement. This has happened to me several times before, usually when working with languages that are very slow, very domain specific and/or has very poor concurrency/parallelism support. When you program in a language for a while, you start to build up a personal toolbox of small utility functions/classes/programs. The value of these goes drastically down if you're forced to use a different language than the one you've accumulated all this code in. What am I missing here? Why shouldn't more focus be placed on generalist languages? Are generalist languages as a category dying or alive and well?

    Read the article

  • What's the proper term for a function inverse to a constructor? Deconstructor, destructor, or something else?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different - it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor or perhaps a deconstructor. For example, let's have (in Haskell): newtype Wrap = Wrap { unwrap :: Int } Here Wrap is the constructor and unwrap is what? I've seen both, for example: ... Most often, one supplies smart constructors and destructors for these to ease working with them. ... at the Haskell wiki, or ... The general theme here is to fuse constructor - deconstructor pairs like ... at the Haskell wikibook (where it's probably meant in a bit more general sense). The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this terminology applicable to other functional languages, or is it used just in the Haskell community?

    Read the article

  • With Choice Comes Complexity

    - by BuckWoody
    "Complex" may be defined as "Having many steps, details or parts." Many of Microsoft's products, including SQL Server, can be complex. I'm stating what most data professionals already know - there's usually multiple ways to do things in SQL Server. For instance, to import some data into a table you can use graphical tools, SQLCMD, bcp, SQL Server Integration Services, BULK INSERT, even PowerShell, just to name a few tools at your disposal. That's really not the issue, though. The bigger issue is that there are normally multiple thought-processes, or methods, that you have available for a task. That's both a strength and a weakness. If things were more simple, you would have fewer choices. Sometimes that's a good thing. Just tell me what I need to do and I'll do it. However, your particular situation may not fit that tool or process, so having more options increases your ability to get your job done the way you need to do it. On the other hand, that's more for you to learn, which is harder. There's another side of this benefit/difficulty that you need to be aware of. Even if you're quite good at what you do, keep in mind that the way you know how to do something may not be the only way to do it. Keep your mind open to new possibilities, and most importantly - to new knowledge. SQL Server professionals teach me something new every day. So embrace the complexity - on balance, it's a good thing! Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • How should I structure my database to gain maximum efficiently in this scenario?

    - by Bob Jansen
    I'm developing a PHP script that analyzes the web traffic of my clients' websites. By placing a link to a JavaScript file on the client's website (think of Google Analytics), my script harvests information like the visitor's IP address, referring link, current page link, user agent, etc. My clients can then view these statistics via a control panel that I have built. These clients can also adjust profile settings, set firewall rules, create support tickets and pay invoices. Currently all the traffic is stored in one table. You can imagine that this table would become very large, as some of my clients receive thousands of pageviews per day. Furthermore, all the traffic data of each client would be stored in the same table, creating a mess. The same goes for the firewall rules currently, and for the invoice and support system. I'm looking for a way to structure my database in a more organized way to hold large amounts of data for multiple users. This is the first project I'm developing that deals with so much data, and I would like to hear suggestions and tips. I was thinking of using multiple databases to structure the data. The main database would store user data (email, pass, id, etc.) and admin/website settings. Then each client would have a unique database labeled prefix_userid, which carries tables holding their traffic, invoice, and support ticket data. Would this be a solution, and would it slow down or speed up overall performance (that is, spreading the data over multiple databases)? I have a solid VPS, but I would like to play it safe and be as efficient as possible.

    Read the article

  • Kill xserver from command line (init 3/5 does not work)

    - by John Smith
    Hi, I'm running Linux Mint 10, although I've had this same issue with other variants of Linux. I've been told, and found while researching, that if the X server hangs or otherwise errors, one can drop to a root prompt, usually at another tty, and execute init 3 (to drop to a non-graphical, multi-user runlevel) and then init 5 to return to the default graphical session. Needless to say, I've tried this before in multiple configurations on multiple machines to no avail. The only feedback I receive from executing those two commands is a listing of VMware services (from a kernel module) that are stopped and then restarted. Note: If I run startx (either before or after init 3), then I am told that the X server is still running and that I should remove /tmp/.X0-lock. Having tried that, it removes that error message, but claims that the X server cannot be attached as another instance is running. How do I kill the X server completely? Can I killall some process name?

    Read the article
