Search Results

Search found 25123 results on 1005 pages for 'domain model'.

  • Design in "mixed" languages: object oriented design or functional programming?

    - by dema80
    In the past few years, the languages I like to use have become more and more "functional". I now use languages that are a sort of "hybrid": C#, F#, Scala. I like to design my application using classes that correspond to the domain objects, and use functional features where this makes coding easier, more concise and safer (especially when operating on collections or when passing functions). However, the two worlds "clash" when it comes to design patterns. The specific example I faced recently is the Observer pattern. I want a producer to notify some other code (the "consumers/observers", say a DB storage, a logger, and so on) when an item is created or changed. I initially did it "functionally" like this:

        producer.foo(item => { updateItemInDb(item); insertLog(item) })
        // calls the function passed as argument as an item is processed

    But I'm now wondering if I should use a more "OO" approach:

        interface IItemObserver { onNotify(Item) }
        class DBObserver : IItemObserver ...
        class LogObserver : IItemObserver ...

        producer.addObserver(new DBObserver)
        producer.addObserver(new LogObserver)
        producer.foo() // calls observers in a loop

    What are the pros and cons of the two approaches? I once heard an FP guru say that design patterns exist only because of the limitations of the language, and that's why there are so few in functional languages. Maybe this is an example of that? EDIT: In my particular scenario I don't need it, but... how would you implement removal and addition of "observers" in the functional way? (I.e., how would you implement all the functionality in the pattern?) Just by passing a new function, for example?
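
    On the closing EDIT question, here is a minimal C# sketch of one functional answer (ItemProducer, Item, and UpdateItemInDb are illustrative names, not from the original code): subscription just stores a function, and removal is itself a function returned by the subscribe call.

        using System;
        using System.Collections.Generic;

        public class Item { public string Name { get; set; } }

        public class ItemProducer
        {
            // "Observers" are plain functions kept in a list.
            private readonly List<Action<Item>> observers = new List<Action<Item>>();

            // Adding an observer hands back its removal handle - also just a function.
            public Action Subscribe(Action<Item> observer)
            {
                observers.Add(observer);
                return () => observers.Remove(observer);
            }

            // Notify every current observer as an item is processed.
            public void Foo(Item item)
            {
                foreach (var observer in observers)
                    observer(item);
            }
        }

    Usage would be var unsubscribe = producer.Subscribe(item => UpdateItemInDb(item)); followed later by unsubscribe(); - so the full add/remove functionality of the pattern survives without any IItemObserver interface.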

  • Best way to protect website application code

    - by Gaz_Edge
    Background: I have a web application that I host on my own server. I have clients who use the application as is, but some have asked if they can host the application on their own server. This enables them to have their own URLs rather than mine. The application only forms part of their website, so I'm assuming it will not be possible for my server to respond to a direct call to their domain. To give some examples, I currently have URLs like www.mydomain.com/profile and www.mydomain.com/index.php?option=someoption&view=someview&id=1. What my clients want is www.theirdomain.com/profile, www.theirdomain.com/index.php?option=someoption&view=someview&id=1, etc.

    Question: What is the best way for me to allow them to use their own URLs with my application, without giving them all the backend source code and databases to install on their server? One way I thought of would be to create a router.php file that sits on their server. The router asks my server to output the HTML, modifies all the links in the HTML source, and outputs the new HTML through the client's server. When a link is clicked on the client's site, the router receives the request and modifies the URL to get the data from my server. Is this an effective way to achieve what I want, or is it way off the mark?

  • How do I draw a 2D plane and rotate the camera (to be a board) in a 3D XNA game?

    - by Mech0z
    I am trying to create a simple board game, but the 3D part of this is really killing me. From what I can gather I have created a plane, but it never moves even though I turn the camera. That partially makes sense, as I only apply the camera to a 3D model - but in my head it makes no sense at all: if I turn the camera, shouldn't it affect ALL my models? With this code the camera only "cares" about the 3D cylinder; the plane stays completely still:

        private void OnDraw(object sender, GameTimerEventArgs e)
        {
            SharedGraphicsDeviceManager.Current.GraphicsDevice.Clear(Color.CornflowerBlue);

            foreach (ModelMesh mesh in cylinderModel.Meshes)
            {
                foreach (BasicEffect effect in mesh.Effects)
                {
                    //effect.World = Matrix.CreateRotationX((float)e.TotalTime.TotalSeconds * 2);
                    effect.View = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
                    effect.Projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);
                    effect.EnableDefaultLighting();
                }
                mesh.Draw();
            }

            //cameraPosition.Z -= 5.0f;

            _effect.World = Matrix.CreateRotationZ(MathHelper.ToRadians(((float)e.TotalTime.Milliseconds / 2) % 360));
            foreach (EffectPass pass in _effect.CurrentTechnique.Passes)
            {
                pass.Apply();
                SharedGraphicsDeviceManager.Current.GraphicsDevice.DrawUserPrimitives(PrimitiveType.TriangleStrip, _vertices, 0, 1, VertexPositionColor.VertexDeclaration);
            }
        }

    Is there a way to get the camera to affect all models?
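
    A hedged sketch of the likely cause, assuming _effect is a BasicEffect like the cylinder's (an assumption - the excerpt never shows its declaration): the plane ignores the camera because _effect is only ever given a World matrix, so its View and Projection stay at their defaults. Giving it the same camera matrices each frame should make the camera affect both models:

        // Inside OnDraw, after clearing the device:
        Matrix view = Matrix.CreateLookAt(cameraPosition, Vector3.Zero, Vector3.Up);
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.ToRadians(45.0f), aspectRatio, 1.0f, 10000.0f);

        _effect.World = Matrix.CreateRotationZ(
            MathHelper.ToRadians(((float)e.TotalTime.Milliseconds / 2) % 360));
        _effect.View = view;             // without these two assignments the
        _effect.Projection = projection; // plane never sees the camera at all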

  • Why isn't one of the constant buffers being loaded inside the shader?

    - by Paul Ske
    I did get the model to load with tessellation; the only problem is that one of the constant buffers isn't actually updating the tessellation factor inside the hull shader. I created a message box at the rendering point, so I know for sure the tessellation factor is assigned to the dynamic constant buffer. Inside the shader code, where it says .Edges[1] = tessellationAmount;, the tessellationAmount is supposed to be sent from the dynamic buffer to the shader. Otherwise it's just a plain box. To explain better: there's a matrixBuffer, a cameraBuffer, and a TessellationBuffer for constants, plus a multiBuffer array that holds the matrix, camera, and tessellation buffers. So when I set the hull shader, pixel shader, vertex shader, and domain shader, each gets assigned from the multibuffer, e.g. devcon->HSSetConstantBuffers(0, 3, multibuffer);. The only way around the whole issue at the moment would be to go into the shader and hard-code how much the edges tessellate, and the inside as well, with the same number. My question is: why wouldn't the TessellationBuffer work in the shader?

  • Redirection & SEO related stuff while moving to a new blog

    - by Karshim Kanwar
    I have a WordPress blog and recently I have set up a new one - let's call them "blog old" and "blog new". What I did is move the content, photos, pictures, and all 250 posts from blog old to blog new. The blog names are different, as they point to different domain names. I read helpful things on this site itself, here. I will no longer use blog old; moreover, I am concerned about the SEO of blog new. Blog new is very new (just 24 hours old, and no pages have been indexed by Google yet). I have done the following: deleted all the post shares on the Facebook fan page, Twitter profile, and Google+ page, and finally deleted the fan page, Twitter profile, and Google+ page themselves; and edited the links back to the old blog within blog new. The questions I have are: How do I prevent duplicate content issues? Do I go straight ahead and delete all the posts in blog old? Should I start sharing the posts of blog new? Should I submit the new site to Webmaster Tools, or wait a few weeks? What issues can I face relating to SEO? Every comment here is appreciated!

  • How can I reduce the amount of time it takes to fully regression test an application ready for release?

    - by DrLazer
    An app I work on is being developed with a modified version of Scrum. If you are not familiar with Scrum, it's an alternative to the more traditional waterfall model, where a series of features are worked on for a set amount of time known as a sprint. The app is written in C# and makes use of WPF. We use Visual C# 2010 Express Edition as an IDE. If we work on a sprint and add a few new features, but do not plan to release until a further sprint is complete, then regression testing is not an issue as such: we just test the new features and give the app a good once-over. However, if a release that our customers can download is planned, a full regression test is factored in. In the past this wasn't a big deal - it took 3 or 4 days, and the devs simply fixed up any bugs found in the regression phase - but now, as the app gets larger and larger and incorporates more and more features, the regression is stretching out for weeks. I am interested in any methods that people know of or use that can decrease this time. At the moment the only ideas I have are to either start writing unit tests, which I have never fully tried in a commercial environment, or to research the possibility of UI automation APIs or tools that would allow me to write a program to perform a series of batch tests. I know literally nothing about the possibilities of UI automation, so any information would be valuable. I don't know that much about unit testing either - how complicated can the tests be? Is it possible to get unit tests to drive the UI? Are there any other methods I should consider? Thanks for reading, and for any advice in advance.
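
    For a sense of scale on the unit-testing idea: unit tests are usually small and target logic classes rather than the WPF UI itself; driving the UI is normally done separately with an automation framework such as Microsoft UI Automation or a tool built on it. A minimal NUnit-style sketch, where PriceCalculator stands in for one of the app's logic classes (both names are assumptions, not from the app):

        using NUnit.Framework;

        [TestFixture]
        public class PriceCalculatorTests
        {
            [Test]
            public void Total_SumsLineItemsAndAppliesTax()
            {
                // Arrange: a hypothetical logic class from the application.
                var calc = new PriceCalculator(taxRate: 0.20m);
                calc.AddLineItem(10.00m);
                calc.AddLineItem(5.00m);

                // Act + Assert: 15.00 plus 20% tax.
                Assert.AreEqual(18.00m, calc.Total());
            }
        }

    Hundreds of tests like this run in seconds, which is what shrinks a multi-week manual regression pass; the harder UI journeys then become a much smaller automation problem.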

  • HP dv7 Beats Audio Subwoofer on while headphones plugged in

    - by msilis
    I have looked for answers to this but have not found anything. People have similar problems, but mostly with getting the subwoofer to work at all. I got the subwoofer to work by editing the /etc/modprobe.d/alsa-base.conf file and adding options snd-hda-intel model=ref. When I plug in my headphones, the main speakers turn off, but the subwoofer still has sound coming through it. The output is selected automatically in sound preferences and is set to headphones when they are plugged in. The main speakers and the subwoofer seem to have two different volume controls, so I also tried muting the main speakers (which in turn mutes the subwoofer) before plugging in the headphones - but as soon as the headphones are plugged in, the subwoofer has sound coming out of it again. I am running Ubuntu 11.10 64-bit on an HP dv7-4285dx. I could simply leave the subwoofer off, but since I have gotten it to work I would like to keep it around, and not have to change a config file every time I want to plug in my headphones. Any ideas or suggestions would be greatly appreciated. Thanks

  • Dynamic Monitoring Service (DMS) Configuration Dumping and CPU Utilization

    - by ShawnBailey
    There was recently a report of CPU spikes on a system that were occurring at precise 3-hour intervals. Research revealed that the spikes were the result of the Dynamic Monitoring Service generating a metrics dump and writing it under the server 'logs' folder for every WLS server in the domain. This blog provides some information on what this is for and how to control it. The Dynamic Monitoring Service is a facility in FMW (JRF, to be more precise) that collects runtime data on the components deployed to WebLogic. Each component is responsible for how much or how little it uses the service, and SOA collects a fair amount of information. To view what is collected on any running server you can use the following URL, http://host:port/dms/Spy, and log in with admin credentials. DMS is essentially always running and collecting this information in the runtime, and to protect against loss of this data it also runs automatic backups, by default at the 3-hour interval mentioned above. Most of the management options for DMS are exposed through WLST, but these settings are not, so we must open the dms_config.xml file, which can be found at DOMAIN_HOME/config/fmwconfig/servers/<server_name>/dms_config.xml. The contents are fairly short, and at the bottom you will find the following entry:

        <dumpConfiguration>
            <dump intervalSeconds="10800" maxSizeMBytes="75" enabled="true"/>
        </dumpConfiguration>

    The interval of 10800 seconds corresponds to the 3 hours, and the maximum size is 75MB. The file is written as an archive to DOMAIN_HOME/servers/<server_name>/logs/metrics. This archive contains the dump in XML format. You can disable the dumps altogether by simply setting the 'enabled' value to 'false', or of course you could modify the other parameters to suit your needs. Disabling the dumps will NOT impact DMS collection or display at runtime; it will only eliminate these periodic backups.

  • Scripted SOA Diagnostic Dumps for PS6 (11.1.1.7)

    - by ShawnBailey
    When you upgrade to SOA Suite PS6 (11.1.1.7) you acquire a new set of Diagnostic Dumps in addition to what was available in PS5. With more than a dozen to choose from, and not wanting to run them one at a time, this blog post provides a sample script to collect them all quickly and hopefully easily. There are several ways that this collection could be scripted, and this is just one example.

    What is included:
      - wlst.properties: Ant properties
      - build.xml
      - soa_diagnostic_script.py: Python script

    What is collected:
      - 5 contextual thread dumps at 5-second intervals
      - Diagnostic log entries from the server
      - WLS image, which includes the domain configuration and WLS runtime data
      - Most of the SOA Diagnostic Dumps, including those for the BPEL runtime, adapters, and composite information from MDS

    Instructions:
      1. Download the package and extract it to a location of your choosing
      2. Update the properties file 'wlst.properties' to match your environment
      3. Run 'ant' (must be on the path)
      4. Collect the zip package containing the files (by default it will be in the script.output location)

    Properties reference:
      - oracle_common.common.bin: location of oracle_common/common/bin
      - script.home: location where you extracted the script and supporting files
      - script.output: location where you want the collections written
      - username: user name for the server connection
      - pwd: password to connect to the server
      - url: T3 URL for the server connection, '<host>:<port>'
      - dump_interval: interval in seconds between thread dumps
      - log_interval: duration in minutes that you want to go back for diagnostic log information

  • Example: Cross Cutting Concerns of an Application

    A little while ago I was given an opportunity to design and implement a new system that sent data via an HTTP POST method and then processed the results that were returned so that they could be inserted into a database. My system had eight core concerns that it needed to fulfill:

      1. Database Access
      2. Data Entities
      3. Worker
      4. Result Processing
      5. Process Flow Manager
      6. Email/Notification
      7. Error Handling
      8. Logging

    Of these eight, five were actually cross-cutting concerns:

      1. Database Access
      2. Data Entities
      3. Email/Notification
      4. Error Handling
      5. Logging

    These five cross-cutting concerns were determined after I created an aspect-oriented model to help identify the system components that could be factored out into separate components. These separated components would then be included in the system so that they could be used by the various other components. The five components allow all of the other components to access the database, store data, send notifications, handle errors, and log all system events; thus, they are used to share unique aspects of the system via their implementation. The use of aspect-oriented architecture greatly helped me define what components I needed to create and what each of those components could do. It also showed how all of the aspects depended on each other, so that each component did not have to re-implement code that already existed in the system.
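
    As a hedged illustration of what "factoring out" one of these concerns can look like (all names here are made up, not from the system described): components depend on the shared Logging component through a small interface rather than re-implementing it.

        // Illustrative only: the logging concern, injected wherever it is needed.
        public interface ILogger
        {
            void Log(string message);
        }

        public class ResultProcessor
        {
            private readonly ILogger logger;

            public ResultProcessor(ILogger logger)
            {
                this.logger = logger; // the cross-cutting concern, supplied from outside
            }

            public void Process(string result)
            {
                logger.Log("Processing result: " + result); // shared aspect, not re-implemented
                // ... result-processing logic specific to this component ...
            }
        }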

  • When decomposing a large function, how can I avoid the complexity from the extra subfunctions?

    - by missingno
    Say I have a large function like the following:

        function do_lots_of_stuff(){
            { //subpart 1
                ...
            }
            ...
            { //subpart N
                ...
            }
        }

    A common pattern is to decompose it into subfunctions:

        function do_lots_of_stuff(){
            subpart_1(...)
            subpart_2(...)
            ...
            subpart_N(...)
        }

    I usually find that decomposition has two main advantages:

      - The decomposed function becomes much smaller. This can help people read it without getting lost in the details.
      - Parameters have to be explicitly passed to the underlying subfunctions, instead of being implicitly available by just being in scope. This can help readability and modularity in some situations.

    However, I also find that decomposition has some disadvantages:

      - There are no guarantees that the subfunctions "belong" to do_lots_of_stuff, so there is nothing stopping someone from accidentally calling them from the wrong place.
      - A module's complexity grows quadratically with the number of functions we add to it. (There are more possible ways for things to call each other.)

    Therefore: are there useful conventions or coding styles that help me balance the pros and cons of function decomposition, or should I just use an editor with code folding and call it a day? (One possible convention is sketched below.)

    EDIT: This problem also applies to functional code (although in a less pressing manner). For example, in a functional setting we would have the subparts return values that are combined at the end, and the decomposition problem of having lots of subfunctions able to use each other is still present. We can't always assume that the problem domain can be modeled with just a few small, simple types and a handful of highly orthogonal functions. There will always be complicated algorithms or long lists of business rules that we still want to be able to deal with correctly.

        function do_lots_of_stuff(){
            p1 = subpart_1()
            p2 = subpart_2()
            pN = subpart_N()
            return assembleStuff(p1, p2, ..., pN)
        }
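
    One convention that directly addresses the "belonging" disadvantage, sketched here in C# (which has local functions; several other languages offer an equivalent): nest the subfunctions inside the parent, so nothing else in the module can call them and the module's surface area does not grow.

        public class StuffDoer
        {
            public int DoLotsOfStuff()
            {
                var p1 = Subpart1();
                var p2 = Subpart2();
                return AssembleStuff(p1, p2);

                // Local functions: visible only inside DoLotsOfStuff.
                int Subpart1() { /* ... real work ... */ return 1; }
                int Subpart2() { /* ... real work ... */ return 2; }
                int AssembleStuff(int a, int b) { return a + b; }
            }
        }

    The trade-off is that the parent grows again textually, so this suits subparts that are genuinely private; anything reusable still deserves promotion to a normal, separately tested function.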

  • How to divide work to a network of computers?

    - by Morpork
    Imagine a scenario as follows: you have a central computer that generates a lot of data. This data must go through some processing which, unfortunately, takes longer than the generation. In order for the processing to catch up with real time, we plug in more slave computers. Further, we must take into account the possibility of slaves dropping out of the network mid-job, as well as additional slaves being added. The central computer should ensure that all jobs are finished to its satisfaction, and that jobs dropped by a slave are retasked to another. The main question is: what approach should I use to achieve this? Perhaps the following would help me arrive at an answer: Is there a name or design pattern for what I am trying to do? What domain of knowledge do I need to get these computers to talk to each other? (E.g., will a database, which I have some knowledge of, be enough, or will this involve sockets, of which I have no knowledge yet?) Are there any examples of such a system? The main question is a bit general, so it would be good to have a starting point/reference point. Note that I am assuming the constraints of C++ and Windows, so solutions pointing in that direction would be appreciated.
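
    For a starting point: this is usually called the master/worker (or farmer/worker) pattern with lease-based retasking, and it can be built on a shared database just as well as on sockets. A minimal sketch of the retasking bookkeeping follows - written in C# for brevity, though the shape ports directly to C++, and every name in it is illustrative:

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Linq;

        public class Master
        {
            private readonly ConcurrentQueue<int> pending = new ConcurrentQueue<int>();
            private readonly Dictionary<int, DateTime> leased = new Dictionary<int, DateTime>();
            private readonly TimeSpan leaseTime = TimeSpan.FromSeconds(30);

            public void Submit(int jobId) { pending.Enqueue(jobId); }

            // A slave asks for work (over a socket, or by polling a shared database).
            public bool TryLease(out int jobId)
            {
                if (!pending.TryDequeue(out jobId)) return false;
                lock (leased) { leased[jobId] = DateTime.UtcNow + leaseTime; }
                return true;
            }

            // A slave reports success; the job leaves the system.
            public void Complete(int jobId) { lock (leased) { leased.Remove(jobId); } }

            // Run periodically: jobs whose lease expired (slave dropped out) are retasked.
            public void RequeueExpired()
            {
                lock (leased)
                {
                    foreach (var jobId in leased.Where(kv => kv.Value < DateTime.UtcNow)
                                                .Select(kv => kv.Key).ToList())
                    {
                        leased.Remove(jobId);
                        pending.Enqueue(jobId);
                    }
                }
            }
        }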

  • Conventions for search result scoring

    - by DeaconDesperado
    I assume this type of question is more on-topic here than on regular SO. I have been working on a search feature for my team's web application and have had a lot of success building a multithreaded, "divide and conquer" processing system to work through a large amount of fulltext. Our problem domain is pretty specific: users of the app generate posts, and as a general rule, posts that are more recent are considered to be of greater relevance. Some of the data we are trying to extract from search is very specific (users' feelings about particular items or things), and we are using Python NLTK to do named-entity extraction to find likely interesting query terms. Essentially we look for descriptive adjective-noun pairs and generate a general picture of a user's expressed sentiment as a list of tokens. This search is intended as an internal tool for our team to draw out a local picture of sentiments like "soggy pizza". There's some machine learning in there too, to do entity resolution on terms like "soggy" to all manner of adjectives expressing nastiness.

    My problem is that I am at a loss for how to score these results. The text being searched is split up into a list of tokens, so my initial approach would be to normalize a float score between 0.0 and 1.0 based on how far into the list the terms appear and how often they are repeated (a later mention of the term being worth less, an earlier one more; greater frequency meaning a greater score; and so on). A certain amount of weight could be given to the timestamp as well, though I am not certain how to calculate this. I am curious whether anyone has had to solve a similar search-relevance grading problem between appreciable metrics (frequency, term location/collocation, recency), and whether there are any guidelines for how to weight each. I should mention as well that the final fallback procedure in the search is to pipe the query to Sphinx, which has its own scoring practices. Sphinx operates as the last resort in case our application-specific processing can't find any eligible candidates.
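
    As one concrete starting point - a sketch only, written in C# for readability even though the pipeline above is Python, and with the weights and the 30-day recency decay as pure assumptions to tune against real queries - compute each signal as an independent 0-1 score and take a weighted sum:

        using System;

        public static class Scoring
        {
            public static double Score(
                int firstPosition,  // index of the term's first occurrence in the token list
                int tokenCount,     // total tokens in the post
                int frequency,      // number of occurrences of the term
                TimeSpan age,       // now minus the post's timestamp
                double wPos = 0.4, double wFreq = 0.3, double wRecency = 0.3)
            {
                double positional = 1.0 - (double)firstPosition / tokenCount; // earlier = nearer 1
                double freq = 1.0 - 1.0 / (1 + frequency);                    // saturates, never reaches 1
                double recency = Math.Exp(-age.TotalDays / 30.0);             // decays over roughly a month
                double score = wPos * positional + wFreq * freq + wRecency * recency;
                return Math.Max(0.0, Math.Min(1.0, score));
            }
        }

    The useful property is that each signal is bounded, so no single metric can swamp the others; the weights then become the only knobs to fit against human-judged sample queries.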

  • Focusing and Selecting the Text in ASP.NET TextBox Controls

    When a browser displays the HTML sent from a web server, it parses the received markup into a Document Object Model, or DOM, which models the markup as a hierarchical structure. Each element in the markup - the <form> element, <div> elements, <p> elements, <input> elements, and so on - is represented as a node in the DOM and can be programmatically accessed from client-side script. What's more, the nodes that make up the DOM have functions that can be called to perform certain behaviors; which functions are available depends on what type of element the node represents. One function common to almost all node types is focus, which gives keyboard focus to the corresponding element. The focus function is commonly used in data entry forms, search pages, and login screens to put the user's keyboard cursor in a particular textbox when the web page loads, so that the user can start typing a search query or username without having to first click the textbox with the mouse. Another useful function is select, which is available for <input> and <textarea> elements and selects the contents of the textbox. This article shows how to call an HTML element's focus and select functions - both directly from client-side script and from server-side code. Read on to learn more!
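
    For the server-side half, a minimal ASP.NET Web Forms sketch (txtSearch is an illustrative TextBox ID, not from the article): Page.SetFocus emits the client-side focus() call for you, and a registered startup script can invoke select().

        protected void Page_Load(object sender, EventArgs e)
        {
            // Give the search box keyboard focus when the page loads.
            Page.SetFocus(txtSearch);

            // Select its current contents so that typing replaces them.
            ClientScript.RegisterStartupScript(
                GetType(), "selectSearch",
                "document.getElementById('" + txtSearch.ClientID + "').select();",
                true);
        }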

  • Do you have to recreate workspaces after upgrading a TFS 2008 server to TFS 2010?

    - by Clara Oscura
    I am just reposting this thread from an MSDN forum, since it seems to be unavailable. It was very useful when I was having trouble with my folder mappings after migrating to TFS 2010.

    Question: I opened VS2008 and connected it to the upgraded 2010 TFS server. Upon clicking any of our Team Projects in Source Control Explorer I get "Team Foundation Error - The workspace MYWORKSPACE;DOMAIN\MYUsername already exists on computer MYPCNAME."

    Answer: The same local paths on your machine are mapped to two different workspaces, one on the pre-upgrade server and one on the post-upgrade server. It's not safe to have multiple workspaces on different servers mapped to the same local paths, because you could pend some changes while connected to one server, and the other server would have no idea what you did. You should either delete your conflicting workspaces from one of the servers (if you don't need them on both), or test the new TFS instance from a new workspace (on a different machine). If you want to test an existing production workspace on both servers, then yes, you will have to mess around with the workspace cache. You don't have to delete the entire cache; you just need to run "tf workspaces /remove:* /server:<serverurl>" to clear the cached workspaces from a server (the command won't delete the workspaces), and possibly "tf workspaces /server:<server>" to refresh the workspace cache for a given server. You will also have to back up and restore the workspace before switching servers, or your local files could be inconsistent.

    From the "Microsoft Visual Studio Team Foundation Server 2010 Beta 1" forum (not available anymore?)

  • Directing Multiple ccTLDs to 1 gTLD with a country-specific subdirectory?

    - by Pascal Van Opzeeland
    We have multiple ccTLD domains and are thinking about how best to combine them into one, to focus our link-building efforts. We run a website through which we offer software-as-a-service, so we could potentially sell to any country in the world; however, Germany is our most important market. We currently have .com, .de, .nl and .pl domains, and all of these domains have a large number of unique content pages. What we are planning is to move everything to the .com with language-based subdirectories: .com/en/, .com/de/, etc. I have two questions concerning this: 1) How much of an advantage does a ccTLD have over a gTLD with country-specific subdirectories in search rankings - so, .de versus .com/de/? 2) How can we best redirect the visitors of our old ccTLDs to the gTLD's subdirectories? We would like to lose as few search engine rankings as possible. Thank you for your help.

  • Quick Fix for GlassFish/MySQL NoPasswordCredential Found

    - by MarkH
    Just the other day, I stood up a GlassFish 3.1.2 server in preparation for a new web app we've developed. Since we're using MySQL as the back-end database, I configured it for MySQL (driver) and created the requisite JDBC resource and supporting connection pool. Pinging the finished pool returned a success, and all was well. Until we fired up the app, that is - in this case, after a weekend. Funny how things seem to break when you leave them alone for a couple of days. :-)

    Strangely, the error indicated "No PasswordCredential found". Time to re-check that pool. All the usual properties and values were there (URL, driverClass, serverName, databaseName, portNumber, user, password) and were populated correctly. Yes, the password field, too. And it had pinged successfully. So why the problem? A bit of searching online produced enough relevant material to offer promise. I didn't take notes as I was investigating the cause (note to self), but here were the general steps I took to resolve the issue:

    First, per some guidance I had found, I tried resetting the password value to nothing (using () for a value). Of course, this didn't fix anything; the database account requires a password. And when I tried to put the value back, GlassFish politely refused. Hmm. I'd seen that some folks created a new pool to replace the "broken" one, and while that did work for them, it seemed to simply side-step the issue. So I deleted the password property - which GlassFish allowed me to do - and restarted the domain. Once I was back in, I re-added the password property and its value, saved it, and pinged... success! But now to the app for the litmus test. The web app worked, and everything and everyone was now happy. Not bad for a Monday. :-D

    Hope this helps, Mark

  • Location-Based redirection and duplication in sub-directories affecting SEO

    - by Joshua
    I currently own the website www.xyz.com. The website has a sub-directory for each of the 3 target countries: .../en-US/ (United States), .../es-MX/ (Mexico), and .../es-DO/ (Dominican Republic). I have two main questions about this setup. 1) Currently, the main domain/root (xyz.com) contains a blank index.php file, but I would like a user to be redirected to one of the sub-directories based on their regional location. What is the best way to accomplish this? I have looked at browser-language-based redirection, but how would I know whether to direct a user to the MX or DO site if the browser language is set to Spanish? Is there a way to detect a user's geographic location? 2) Also, the 3 websites are practically identical, except that they have 3 unique color schemes and the US site is in English while the MX and DO sites are in Spanish. My problem is that I believe Googlebot is penalizing/banning my site because the Spanish text on the MX and DO pages is nearly identical and is thus marked as duplicate/spam. Is there a way to avoid this?
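
    On question 1, the browser language alone cannot separate MX from DO; an IP-geolocation lookup can. A hedged sketch in ASP.NET terms (the live site may well be PHP, and GeoIp.CountryCode is a stand-in for whatever geolocation library or service is used - it is not a built-in API):

        protected void Page_Load(object sender, EventArgs e)
        {
            // Hypothetical IP-to-country lookup - swap in a real GeoIP library or service.
            string country = GeoIp.CountryCode(Request.UserHostAddress);

            string target;
            if (country == "MX")      target = "/es-MX/";
            else if (country == "DO") target = "/es-DO/";
            else                      target = "/en-US/"; // default for everyone else

            Response.Redirect(target, false);
        }

    On question 2, the usual remedy is to mark the three sub-directories with rel="alternate" hreflang annotations (en-US, es-MX, es-DO) so search engines treat the two Spanish versions as regional variants of the same content rather than duplicates.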

  • What’s ‘default’ for?

    - by Strenium
    Sometimes there's a need to communicate explicitly that a value variable is yet to be "initialized" - in other words, that we've never changed it from its default value. Perhaps "initialized" is not the right word, since a value type will always have some sort of value (even a nullable one), but that's just it - how do we tell? Of course an 'int' would be 0, an 'enum' would be the member whose underlying value is 0, and so on - we could certainly make this kind of check "by hand", but eventually it would get a bit messy. There's a more elegant way, using the little-known functionality of 'default'. Let's just say we have a simple Enum:

        namespace xxx.Common.Domain
        {
            public enum SimpleEnum
            {
                White = 0, /* 0 makes White the default value for this enum type */
                Black = 1,
                Red = 2
            }
        }

    In the case below we set the value of the enum to 'White', which carries the underlying value 0 and is therefore the default value for the enum. So the snippet below will set the 'isDefault' Boolean to 'true':

        SimpleEnum simpleEnum = SimpleEnum.White;
        bool isDefault; /* btw this one is 'false' by default */

        isDefault = simpleEnum == default(SimpleEnum); /* default value: 'White' */

    Here we set the value to 'Red', and 'default' tells us whether or not this is still the default value for the enum type - in this case: 'false'.

        simpleEnum = SimpleEnum.Red; /* change from the default */
        isDefault = simpleEnum == default(SimpleEnum); /* value is not the default any longer */

    The same 'default' functionality can also be applied to DateTimes, value types, and other custom types as well. Sweet 'n short. Happy coding!

  • Choosing a JavaScript Asynch-Loader

    - by Prisoner ZERO
    I've been looking at various asynchronous resource loaders and I'm not sure which one to use yet. Where I work we have disparate group efforts whose class modules may use different versions of jQuery (etc.); as such, nested dependencies may differ as well. I have no control over this, so I need to dynamically load resources which may use alternate versions of the same library. As such, here are my requirements:

      - Load JavaScript and CSS resource files asynchronously.
      - Manage dependency order and nested dependencies across versions.
      - Detect if a resource is already loaded.
      - Must allow for cross-domain loading (CDNs).
      - (Optional) Allow us to unload a resource.

    I've been looking at:

      - curl
      - RequireJS
      - JavaScriptMVC
      - LABjs

    I might be able to fake these requirements myself by loading versions into properly namespaced variables and using an array to track what is already loaded... but (hopefully) someone has already invented this. So my questions are: Which ones do you use, and why? Are there others that may satisfy my requirements fully? Which do you find most eloquent and easiest to work with, and why?

  • Welcome to the Java Training Beat!

    - by tmcginn
    We are a group of dedicated training developers for Java, located in the US, India, and now Mexico. In this blog we will announce new training content and events that might be of interest to our readers. In this first installment of the Java Training Beat, I would like to introduce three new Oracle By Example (OBE) modules I recently released and posted to the Oracle Online Learning Library:

      - Creating a Simple Java Message Service (JMS) Producer with NetBeans and GlassFish - covers how to create a simple text message producer with NetBeans 7 and GlassFish.
      - Creating Java Message Service (JMS) Resources in WebLogic Server 12c - covers how to create JMS resources using the console and WebLogic Server 12c. With this tutorial, you can replicate the results of the first tutorial in WebLogic.
      - Creating a Publish/Subscribe Model with Message-Driven Beans and GlassFish Server - covers how to create a publish/subscribe application using JMS. This tutorial includes a short case study with a JSF front-end application that sends a hotel reservation request object to the server as a MapMessage.

    Hope you find these useful! And do check out the Online Learning Library - we have a wide range of additional content posted, and more being added every month!

  • How can I tell if I am overusing multi-threading?

    - by exhuma
    NOTE: This is a complete re-write of the question. The text before was way too lengthy and did not get to the point! If you're interested in the original question, you can look it up in the edit history.

    I currently feel like I am over-using multi-threading. I have 3 types of data: A, B and C. Each A can be converted to multiple Bs, and each B can be converted to multiple Cs. I am only interested in treating Cs. I could write this fairly easily with a couple of conversion functions, but I caught myself implementing it with threads and three queues (queue_a, queue_b and queue_c). There are two threads doing the different conversions, plus one worker:

      - ConverterA reads from queue_a and writes to queue_b
      - ConverterB reads from queue_b and writes to queue_c
      - Worker handles each element from queue_c

    The conversions are fairly mundane, and I don't know if this model is too convoluted. But it seems extremely robust to me: each "converter" can start working even before data has arrived on its queue, and at any time in the code I can just "submit" new As or Bs and it will trigger the conversion pipeline, which in turn will trigger a job by the worker thread. Even the resulting code looks simpler. But I am still unsure whether I am abusing threads for something simple.
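
    For reference, the design described above maps almost one-to-one onto producer/consumer queues; here is a minimal sketch (C# with BlockingCollection, chosen for concreteness - the placeholder types stand in for A, B and C). Each stage blocks until input arrives, which is exactly why the converters can start before any data exists:

        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Threading.Tasks;

        public class Pipeline
        {
            // Placeholder conversions: int -> string stands in for A -> B,
            // string -> double for B -> C.
            static IEnumerable<string> ConvertAToB(int a) { yield return a.ToString(); }
            static IEnumerable<double> ConvertBToC(string b) { yield return b.Length; }

            public static void Run(BlockingCollection<int> queueA)
            {
                var queueB = new BlockingCollection<string>();
                var queueC = new BlockingCollection<double>();

                var converterA = Task.Run(() =>
                {
                    foreach (var a in queueA.GetConsumingEnumerable())
                        foreach (var b in ConvertAToB(a))   // each A -> many Bs
                            queueB.Add(b);
                    queueB.CompleteAdding();                // unblocks the next stage
                });

                var converterB = Task.Run(() =>
                {
                    foreach (var b in queueB.GetConsumingEnumerable())
                        foreach (var c in ConvertBToC(b))   // each B -> many Cs
                            queueC.Add(c);
                    queueC.CompleteAdding();
                });

                foreach (var c in queueC.GetConsumingEnumerable())
                {
                    // Worker: handle each C here.
                }

                Task.WaitAll(converterA, converterB);
            }
        }

    Whether this is over-engineering mostly comes down to whether the blocking, back-pressure and "submit at any time" properties are actually needed; if the data fits in memory and arrives all at once, three chained conversion loops do the same job without threads.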

  • SharePoint 2013 Licensing Simplified

    - by Sahil Malik
    Before I begin, let me preface this by saying: I don't work for Microsoft, I don't sell SharePoint, and this is merely my understanding of the SharePoint 2013 licensing model. As always, before making any money decisions based on the below, talk to your Microsoft rep. The below is just my understanding; you are responsible for any decision you may take. With that aside, here is how I understand SharePoint 2013 licensing. Note that everything below is for on-prem SharePoint only. It also goes without saying that you need to purchase Windows Server and SQL Server licenses, etc., on top of what you read below. The basics: you need to buy two things - the SharePoint server, and CALs. SharePoint server comes in SharePoint Foundation, Standard and Enterprise editions. CALs can be either Enterprise or Standard, and they can be bought as CALs for SharePoint or as a CAL suite which includes Exchange and Lync. CALs can also be purchased as user CALs or device CALs.

  • Ubuntu install can't find hard drives

    - by Casey Hungler
    I recently got a Dell Inspiron Special Edition 7720 computer and am trying to install Ubuntu alongside Windows. When I use the WUBI installer, the installation of Ubuntu works as long as I do not boot into Windows; if I boot into Windows, then when I go back into Ubuntu I am given a variety of error messages claiming a corrupt or missing kernel, root directory, etc. I have been working on this problem for about a week and have reinstalled Ubuntu MANY times. So far, I have eliminated all of the following suspects: a corrupt WUBI download (downloaded multiple times, used on other systems); the CD and flash drive I tried, both of which work on other computers; and any program within Ubuntu creating the problem. I also know that others have successfully installed Ubuntu on a computer with my operating system (Windows 7 SP1). This is a much-shortened version of the original question, which has been up for about 5 days and included a more detailed description of the problem, but left everyone clueless as to its source. When I spoke with the Dell service technician who came over today to replace my keyboard, he suggested that the driver for my HDD was so new that it was not compatible with the current version of Ubuntu. His reasoning is as follows: 1) during an install from a flash drive or CD, where I am supposed to get the option to wipe my system or create a dual boot, I get a window that asks me to select a hard drive partition, but none are listed; 2) this model of computer was made public in June of this year, while Ubuntu was released in April. Adopting this theory, it would seem that the WUBI install fails after booting into Windows because Ubuntu can no longer find the files it needs to load. Does this theory seem at all plausible to anyone? I just want to install Ubuntu and have it stay on my computer. I don't care how I put it there, I just need it to work, so I would TRULY appreciate any advice or suggestions anyone could give. Thanks so much for your time and support!

  • How do I decide what type of programmer I want to be?

    - by Pearsonartphoto
    I've been working at my current job for some time, and I'm considering a bit of a career change. I'm trying to decide what exactly it is that I want to do, and I'm really just not sure. I'm not looking for a solution for my particular case; what I'd like to know are some generalities of things I can look for. Here are some positions that I'm considering, with my definitions (I'm probably calling them something other than what is standard, but hopefully this will do for now):

      - Manager: manages programmers in some sense, mostly making sure they are kept working.
      - Coder: a person who is told to make a program do XYZ, and makes it do that. Doesn't have to model anything or come up with formulas.
      - Algorithm Designer: a person who comes up with a way to make software do something, but doesn't necessarily code that program, at least not in its final form.
      - QA: a person who tests code for bugs, preferably with the code in hand.
      - Architect: figures out how all of the pieces fit together; a technical manager of sorts.
      - Maintainer: takes someone else's existing code and makes sure it is fixed when issues arise.

    I'm looking for quizzes, articles, explanations, or anything that can help me figure this out. Also of some note is figuring out what industry I want to work in. Feel free to add any of your own categories.
