Search Results

Search found 32429 results on 1298 pages for 'project layout'.


  • Understanding Visitor Pattern

    - by Nezreli
    I have a hierarchy of classes that represents GUI controls, something like Control -> ContainerControl -> Form. I have to implement a series of algorithms that work with these objects, doing various stuff, and I'm thinking that the Visitor pattern would be the cleanest solution. Take for example an algorithm which creates an XML representation of a hierarchy of objects. Using the 'classic' approach I would do this:

        public abstract class Control
        {
            public virtual XmlElement ToXML(XmlDocument document)
            {
                XmlElement xml = document.CreateElement(this.GetType().Name);
                // Create element, fill it with attributes declared with the control
                return xml;
            }
        }

        public abstract class ContainerControl : Control
        {
            public override XmlElement ToXML(XmlDocument document)
            {
                XmlElement xml = base.ToXML(document);
                // Use foreach to fill the XmlElement with child XmlElements
                return xml;
            }
        }

        public class Form : ContainerControl
        {
            public override XmlElement ToXML(XmlDocument document)
            {
                XmlElement xml = base.ToXML(document);
                // Fill remaining elements declared in the Form class
                return xml;
            }
        }

    But I'm not sure how to do this with the Visitor pattern. This is the basic implementation:

        public class ToXmlVisitor : IVisitor
        {
            public void Visit(Form form)
            {
            }
        }

    Since even the abstract classes help with the implementation, I'm not sure how to do that properly in ToXmlVisitor. Perhaps there is a better solution to this problem. The reason I'm considering the Visitor pattern is that some algorithms will need references not available in the project where the classes are implemented, and there are a number of different algorithms, so I'm avoiding large classes. Any thoughts are welcome.
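    One way to keep the per-level reuse that the virtual ToXML methods provide is to give the visitor one private helper per level of the hierarchy and have the more specific Visit overloads call them in order, mirroring the base.ToXML() chain. Below is a minimal sketch of that idea; the Accept method, the Children property and the exact IVisitor shape are assumptions for illustration, not code from the question:

        using System.Collections.Generic;
        using System.Xml;

        public interface IVisitor
        {
            void Visit(Form form);
        }

        public abstract class Control
        {
            // Classic double dispatch: each concrete class hands itself to the visitor.
            public abstract void Accept(IVisitor visitor);
        }

        public abstract class ContainerControl : Control
        {
            public List<Control> Children { get; } = new List<Control>();
        }

        public class Form : ContainerControl
        {
            public override void Accept(IVisitor visitor) { visitor.Visit(this); }
        }

        public class ToXmlVisitor : IVisitor
        {
            private readonly XmlDocument document = new XmlDocument();

            public XmlElement Result { get; private set; }

            public void Visit(Form form)
            {
                XmlElement xml = VisitControl(form);   // Control-level work
                VisitContainer(form, xml);             // ContainerControl-level work
                // Form-specific attributes would be filled in here.
                Result = xml;
            }

            // Equivalent of Control.ToXML: logic shared by every control.
            private XmlElement VisitControl(Control control)
            {
                return document.CreateElement(control.GetType().Name);
            }

            // Equivalent of ContainerControl.ToXML: recurse into the children.
            private void VisitContainer(ContainerControl container, XmlElement xml)
            {
                foreach (Control child in container.Children)
                {
                    // A full visitor would dispatch via child.Accept(...) and
                    // collect the element each child produces.
                    xml.AppendChild(document.CreateElement(child.GetType().Name));
                }
            }
        }

    The point of the private helpers is that adding a new algorithm means adding one visitor class in whatever project holds the extra references, while the control classes themselves stay untouched.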


  • Automate #include refactoring in C++ [on hold]

    - by Mikhail
    I have a big project with hundreds of files. And as it often happens to C++ projects, the #include directives are a mess. I want to refactor them to increase clarity, decrease compilation time and simplify analysis. For each .h file I want to make sure that:

      - it has #include directives only for the types it actually uses, and
      - it has only forward declarations for types that are used as T* or T&.

    For each .cpp file I want to make sure that it has #include directives only for the types it uses and that are not already included by other headers (no indirect includes where possible). I'm looking for a tool which will help me automate this refactoring. For now I only know of tools that help to remove redundant includes, and there are many:

      - PC-lint
      - include-what-you-use
      - cppclean
      - ProFactor IncludeManager

    But I know of no tools to help me move the necessary includes into .h files or replace includes with forward declarations. Any ideas? Tools for Windows and Visual Studio are preferred. Update: considered to be off-topic; please follow the link to Software Recommendations: http://softwarerecs.stackexchange.com/q/4461/3331


  • Authentication system brainstorm

    - by gansbrest
    Hi. We've got multiple small websites (microsites) and one main high-traffic one with a big user base. The requirement is to build an authentication system which allows users to log in with the same identity across the network. All websites run on different domains, are powered by the Drupal 6 CMS and have separate databases (so sharing tables with a prefix is not an option, and it creates a huge mess in the db anyway). Here is the set of core requirements I came up with:

      1. Users should be able to log in with the same credentials to all sites within the network.
      2. User data is shared between the main site (storage) and all microsites within the network.
      3. Data is synchronized across the network when a user changes it (updates an email or password, for example).
      4. The login/registration process should be seamless and consistent.
      5. Users can register on any of the sites across the network and use that identity to log in later on.

    In the future there might be a need to add OpenID authentication options. Basically we are looking at something similar to what Stack Exchange does, but I'm not sure if they have a central user base or not. I was thinking about a custom solution with two parts (modules): one stored on the main site, keeping user data and responding to requests from clients, and a second module placed on each microsite, which sends requests to the master - a kind of client-server setup. One of the complications I see right away is #3, data synchronization across the network. I just don't want to reinvent the wheel, and maybe some work has already been done in this direction. Looking forward to your ideas on how to approach this project. EDIT: We use a MySQL database.


  • Inheriting projects - General Rules?

    - by pspahn
    This is an area of discussion I have long been curious about, but overall I generally lack the experience to give myself an answer that I would fully trust. We've all been there: a new client shows up with a half-complete project they are looking to finish and launch. For whatever reason they fired their previous developer, and it's now up to you to save the day. I am just finishing up a code review for a new client, and in my estimation it would be better to scrap what the previous developers built and start from scratch. There's a ton of reasons why I am leaning this way, but it still makes me nervous, since the client isn't going to want to hear "those last guys built you a big turd, and I can either polish it or throw it in the trash". What are your general rules for accepting these projects? How do you determine whether it is better to start from scratch or continue with the existing code base? What extra steps might you take to help manage client expectations, since the previous developer may have inflated those expectations beyond a reasonable level? Any other general advice?


  • Grid pathfinding with a lot of entities

    - by Vee
    I'd like to explain this problem with a screenshot from a released game, DROD: Gunthro's Epic Blunder, by Caravel Games. The game is turn-based and tile-based. I'm trying to create something very similar (a clone of the game), and I've got most of the fundamentals done, but I'm having trouble implementing pathfinding.

    Look at the screenshot. The guys in yellow are friendly, and want to kill the roaches. Every turn, every guy in yellow pathfinds to the closest roach, and every roach pathfinds to the closest guy in yellow. By closest I mean the target with the shortest path, not a simple distance calculation. All of this without any kind of slowdown when loading the level or when passing turns. And all of the entities change position every turn. Also (not shown in the screenshot), there can be doors that open and close and change the level's layout. Impressive.

    I've tried implementing pathfinding in my clone. The first attempt was making every roach find a path to a yellow guy every turn, using a breadth-first search algorithm. Obviously incredibly slow with more than a single roach, and it would get exponentially slower with more than a single yellow guy. The second attempt was making every yellow guy generate a pathmap (still breadth-first search) every time he moved. This worked perfectly with multiple roaches and a single yellow guy, but adding more yellow guys made the game slow and unplayable. The last attempt was implementing JPS (jump point search), where every entity individually calculates a path to its target. Fast, but only with a limited number of entities: having fewer than half the entities in the screenshot would make the game slow. And also, I had to pick the "closest" enemy by calculating distance, not shortest path.

    I've asked on the DROD forums how they did it, and a user replied that it was breadth-first search. The game is open source, and I took a look at the source code, but it's C++ (I'm using C#) and I found it confusing. I don't know how to do it. Every approach I tried isn't good enough. I believe that DROD generates global pathmaps somehow, but I can't understand how every entity finds the best individual path to other entities that move every turn. What's the trick? This is a reply I just got on the DROD forums:

    "Without having looked at the code I'd wager it's two (or so) pathmaps for the whole room: one to the nearest enemy, and one to the nearest friendly for every tile. There's no need to make a separate pathmap for every entity when the overall goal is 'move towards nearest enemy/friendly'... just mark every tile with the number of moves it takes to the nearest target and have the entity choose the move that takes it to the tile with the lowest number."

    To be honest, I don't understand it that well.
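    The pathmap the reply describes is a single multi-source breadth-first search per side: seed the queue with every target at distance 0, flood the room once, and afterwards every entity just steps onto the neighbouring tile with the smallest number. A minimal C# sketch of that idea (the grid representation and the passability check are assumptions, not the poster's code):

        using System.Collections.Generic;

        public static class Pathmap
        {
            // Builds one distance map for the whole room: a BFS seeded with
            // every target tile at distance 0. int.MaxValue = unreachable.
            public static int[,] Build(bool[,] passable, IEnumerable<(int x, int y)> targets)
            {
                int w = passable.GetLength(0), h = passable.GetLength(1);
                var dist = new int[w, h];
                for (int x = 0; x < w; x++)
                    for (int y = 0; y < h; y++)
                        dist[x, y] = int.MaxValue;

                var queue = new Queue<(int x, int y)>();
                foreach (var t in targets)
                {
                    dist[t.x, t.y] = 0;
                    queue.Enqueue(t);
                }

                // 8-directional movement, as on a DROD-style grid.
                int[] dx = { -1, -1, -1, 0, 0, 1, 1, 1 };
                int[] dy = { -1, 0, 1, -1, 1, -1, 0, 1 };

                while (queue.Count > 0)
                {
                    var (x, y) = queue.Dequeue();
                    for (int i = 0; i < 8; i++)
                    {
                        int nx = x + dx[i], ny = y + dy[i];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                        if (!passable[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
                        dist[nx, ny] = dist[x, y] + 1;
                        queue.Enqueue((nx, ny));
                    }
                }
                return dist;
            }
        }

    The cost per turn is two floods of the room (one map toward the roaches, one toward the yellow guys), independent of how many entities are moving, which is why the approach stays fast. It also answers "closest by path length" for free, since each entity is standing on a tile whose number is its true path distance to the nearest target.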


  • How do I account for changed or forgotten tasks in an estimate?

    - by Andrew
    To handle task-level estimates and time reporting, I have been using (roughly) the technique that Steve McConnell describes in Chapter 10 of Software Estimation. Specifically, when the time comes for me to create task-level estimates (right before coding begins on a project), I determine the tasks at a fairly granular level so that, whenever possible, I have no tasks with a single-point, 50%-confidence estimate greater than four hours. That way, the task estimation process helps with constructing the software while helping me not to forget tasks during estimation. I come up with a range of hours possible for each task also, and using the statistical calculations that McConnell describes along with my historical accuracy data, I can generate estimates at other confidence levels when desired. I feel like this method has been working fairly well for me. We are required to put tasks and their estimates into TFS for tracking, so I use the estimates at the percentage of confidence I am told to use. I am unsure, however, what to do when I do forget a task, or I end up needing to do work that does not neatly fall within one of the tasks I estimated. Of course, trying to avoid this situation is best, but how do I account for forgotten/changed tasks? I want to have the best historical data I can to help me with future estimates, but right now, I basically am just calculating whether I made the 50%-confidence estimate and whether I made it inside the ranged estimate. I'll be happy to clarify what I'm asking if needed -- let me know what is unclear.
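    The calibration check described in the last paragraph is easy to keep mechanical, and forgotten or changed tasks can simply be logged as extra entries so they drag the hit rates down instead of vanishing from the history. A minimal C# sketch; the record shape and field names are assumptions for illustration, not anything from McConnell's book:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // One task-level estimate and its outcome.
        public record TaskEstimate(double BestCase, double Likely50, double WorstCase, double ActualHours);

        public static class Calibration
        {
            // A 50%-confidence estimate is calibrated when roughly half of
            // the tasks come in at or under it; the range should catch most.
            public static void Report(IReadOnlyList<TaskEstimate> tasks)
            {
                double under50 = tasks.Count(t => t.ActualHours <= t.Likely50) / (double)tasks.Count;
                double inRange = tasks.Count(t => t.ActualHours >= t.BestCase
                                              && t.ActualHours <= t.WorstCase) / (double)tasks.Count;

                Console.WriteLine($"At or under 50% estimate: {under50:P0} (target ~50%)");
                Console.WriteLine($"Inside ranged estimate:   {inRange:P0}");
            }
        }

    A forgotten task recorded with a zero-hour estimate and its real actual hours will, over time, show up as systematic optimism in exactly these two numbers, which is arguably the most honest way to carry it into future estimates.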


  • Double VPN Network Authentication

    - by Pyromanci
    I have a project I'm working on and am looking for some info. Right now I have a VPN network using Cisco PIX 501s for the VPN clients and a Cisco VPN Concentrator 3000 for the VPN server. Since the PIX is constantly connected to the VPN, I want to add an extra level of authentication, meaning that when the user on the other end goes to access anything on the VPN they are asked for a username and password before the connection is established. I've never done this sort of structure before, so I'm not even sure where to begin, whether my current hardware can do something like this, or whether I need to throw some sort of RADIUS/LDAP/Active Directory type server into the mix.


  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than on the CPU. Because of this, in OpenGL I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved).

    For example, in a recent project involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader and tessellating each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster! This prompted me to question the benefits of using the GPU as much as possible.

    So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!


  • What is the correlation between programming language and experience/skills of their users?

    - by Petr Pudlák
    I'm sure there is such a correlation, because experience and skill lead good programmers to pick languages that are better for them, in which they're more productive, while working in a language shapes how programmers think and influences their methods and skills. Is there any research or statistical data on this phenomenon? Perhaps this is not a purely academic question. For example, if someone is starting a new project, it could be worth considering a language (among other criteria, of course) for which there is a higher chance of finding or attracting experienced programmers. Update: please don't fixate on the last paragraph. It's not my intention to choose a language based on this criterion, and I know there are other, far more important ones. My interest is mostly academic. It comes from (subjective) observation, and I wonder if someone has researched it a bit. Also, I'm talking about a correlation, not a rule. Sure, there are both great and terrible programmers in every language; it just seems to me that, in general, there is a correlation.


  • Comparing the Performance of Visual Studio's Web Reference to a Custom Class

    As developers, we all make assumptions when programming. Perhaps the biggest assumption we make is that those libraries and tools that ship with the .NET Framework are the best way to accomplish a given task. For example, most developers assume that using ASP.NET's Membership system is the best way to manage user accounts in a website (rather than rolling your own user account store). Similarly, creating a Web Reference to communicate with a web service generates markup that auto-creates a proxy class, which handles the low-level details of invoking the web service, serializing parameters, and so on.

    Recently a client made us question one of our fundamental assumptions about the .NET Framework and web services by asking, "Why should we use the proxy class created by Visual Studio to connect to a web service?" In this particular project we were calling a web service to retrieve data, which was then sorted, formatted slightly and displayed in a web page. The client hypothesized that it would be more efficient to invoke the web service directly via the HttpWebRequest class, retrieve the XML output, populate an XmlDocument object, then use XSLT to output the result to HTML. Surely that would be faster than using Visual Studio's auto-generated proxy class, right?

    Prior to this request, we had never considered rolling our own proxy class; we had always taken advantage of the proxy classes Visual Studio auto-generated for us. Could these auto-generated proxy classes be inefficient? Would retrieving and parsing the web service's XML directly be more efficient? The only way to know for sure was to test my client's hypothesis.
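    The hand-rolled alternative the client proposed looks roughly like the sketch below. This is an illustration of the approach, not the article's code, and the service URL is a placeholder:

        using System.Net;
        using System.Xml;

        public static class ManualServiceCall
        {
            // Invokes the web service over plain HTTP and parses the XML
            // response directly, bypassing the generated proxy class.
            public static XmlDocument GetData(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "GET";

                using (var response = request.GetResponse())
                using (var stream = response.GetResponseStream())
                {
                    var document = new XmlDocument();
                    document.Load(stream);   // ready for XSLT transformation
                    return document;
                }
            }
        }

    The generated proxy does essentially the same HTTP work underneath, plus XML serialization of the typed parameters and return values, so the question the article sets out to answer is whether skipping that typed layer measurably pays off.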


  • Activate Your Monitor via Motion Trigger

    - by ETC
    Most people are in the habit of jiggling their mouse or tapping their keyboard when they want to wake their monitor. This clever electronics hack adds a sensor to your computer for motion-based monitor activation. At the DIY and electronics blog Radio Etcetera they tackled an interesting project and shared the build guide. Their local volunteer fire department needed a monitor on for quick information checks but they didn’t need it on all the time and they didn’t want to have to walk over and activate the monitor when they needed it. The solution involved hacking a simple infrared security sensor and wiring it via USB to send a mouse command when motion is detected in the room. Fire fighter walks in, monitor turns on and displays information; fire fighter leaves and the monitor goes back to sleep. Hit up the link below to see additional photos, schematics, and the complete build guide. Motion Activated PC Monitor [Radio Etcetera via Hack A Day]


  • Advantages and Disadvantages of the Waterfall Methodology

    In my personal opinion the waterfall method is one of the worst methodologies to use when developing larger systems, because it leaves no room for mistakes. As the name implies, the waterfall methodology does not allow projects to go back upstream to recover from design errors or missing and/or limited requirements. In addition, hidden bugs are usually not found until the testing phase, which can prove very costly and time-consuming for the developer and the client. According to NCycles.com, the waterfall methodology structures a project into separate stages with defined deliverables from each phase:

      - Define
      - Design
      - Code
      - Test
      - Implement
      - Document and Maintain

    The advantages NCycles.com finds in this methodology are:

      - ease in analyzing potential changes
      - ability to coordinate larger teams, even if geographically distributed
      - can enable a precise dollar budget
      - less total time required from subject matter experts

    The disadvantages NCycles.com finds are:

      - lack of flexibility
      - hard to predict all needs in advance
      - intangible knowledge lost between hand-offs
      - lack of team cohesion
      - design flaws not discovered until the testing phase

    References: NCycles.com (2002). Retrieved from http://www.ncycles.com/e_whi_Methodologies.htm on April 17, 2009.


  • How do I get my character to move after adding to JFrame?

    - by A.K.
    So this is kind of a follow-up on my other JPanel question that got resolved by playing around with the layout... Now my MouseListener allows me to add a new Board() object from its class, which is the actual game map and animator itself. But since my Board() takes key events from a Player object inside the Board class, I'm not sure if they are being started. Here's my Frame class, where SideScroller S is the player object:

        package OurPackage;
        //Made By A.K. 5/24/12
        //Contains Frame.

        import java.awt.CardLayout;
        import java.awt.Dimension;
        import java.awt.GridBagLayout;
        import java.awt.GridLayout;
        import java.awt.Image;
        import java.awt.event.MouseEvent;
        import java.awt.event.MouseListener;

        import javax.swing.*;

        public class Frame implements MouseListener {
            public static boolean StartGame = false;
            JFrame frm = new JFrame("Action-Packed Jack");
            ImageIcon img = new ImageIcon(getClass().getResource("/Images/ActionJackTitle.png"));
            ImageIcon StartImg = new ImageIcon(getClass().getResource("/Images/JackStart.png"));
            public Image Title;
            JLabel TitleL = new JLabel(img);
            public JPanel TitlePane = new JPanel();
            public JPanel BoardPane = new JPanel();
            JPanel cards;
            JButton StartB = new JButton(StartImg);
            Board nBoard = new Board();
            static Sound nSound;

            public Frame() {
                frm.setLayout(new GridBagLayout());
                cards = new JPanel(new CardLayout());
                nSound = new Sound("/Sounds/BunchaJazz.wav");
                TitleL.setPreferredSize(new Dimension(970, 420));
                frm.add(TitleL);
                frm.add(cards);
                cards.setSize(new Dimension(150, 45));
                cards.setLayout(new GridBagLayout());
                cards.add(StartB);
                StartB.addMouseListener(this);
                StartB.setPreferredSize(new Dimension(150, 45));
                frm.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                frm.setSize(1200, 420);
                frm.setVisible(true);
                frm.setResizable(false);
                frm.setLocationRelativeTo(null);
                frm.pack();
            }

            public static void main(String[] args) {
                SwingUtilities.invokeLater(new Runnable() {
                    public void run() {
                        new Frame();
                    }
                });
            }

            public void mouseClicked(MouseEvent e) {
                nSound.play();
                StartB.setContentAreaFilled(false);
                cards.remove(StartB);
                frm.remove(TitleL);
                frm.remove(cards);
                frm.setLayout(new GridLayout(1, 1));
                frm.add(nBoard); // Add game "tiles" or content. x = 1200
                nBoard.setPreferredSize(new Dimension(1200, 420));
                cards.revalidate();
                frm.validate();
            }

            @Override
            public void mouseEntered(MouseEvent arg0) { }

            @Override
            public void mouseExited(MouseEvent arg0) { }

            @Override
            public void mousePressed(MouseEvent arg0) { }

            @Override
            public void mouseReleased(MouseEvent arg0) { }
        }


  • Suitable SDK to develop quick game?

    - by gRnt
    I'm currently undertaking a personal project at home that I need to turn around inside the next few months (which, working full time and still learning programming, makes it a tad difficult). I'm looking for suggestions on SDKs or tools (preferably free, or that come with games, similar to Steam tools) that I can use to develop a "game". I'm OK with coding but have no 3D development skills at all. I've very little experience with mod tools or SDKs, but I'm hoping someone can point me in the direction of one that offers the following:

      - a decent library of prefab 3D models to build scenes
      - the ability to add scripting to the scene

    I've used Unity before and would prefer to continue to do so; however, I really have the worst 3D skills imaginable and can't waste time learning them. I'd be looking for prefab items that suit both industrial and possibly more lush environments (trees etc.). If it makes any difference (due to licensing and whatnot), I WILL NOT be selling this game or marketing it in any way, and I am a university student, if any places do educational licences. Another alternative would be to source free 3D models elsewhere; but again, while I'm still learning I have no idea where to look, so if someone could point me in the right direction I'll do the rest of the digging. Thanks


  • What is a generic term for name/identifier? (as opposed to label)

    - by d3vid
    I need to refer to a number of things that have both an identifier value (used in code and configuration) and a human-readable label. These things include:

      - database columns
      - dropdown items
      - subapplications
      - objects stored in a dictionary

    I want two unambiguous terms: one to refer to the identifier/value/key, and one to refer to the label. As you can see, I'm pretty settled on the latter :) For the former, identifier seems best (not everything is strictly a key, and value and name could refer to the label; although identifier usually refers only to a variable name), but I would prefer to follow an established practice if there is one. Is there an established term for this? (Please provide a source.) If not, are there any examples of a choice from a significant source (Java APIs, MSDN, a big FLOSS project)? (I wasn't sure if this should be posted here or to English Language & Usage. I thought this was the more appropriate expert audience. Happy to migrate if not.)


  • Programmatically disclosing a node in af:tree and af:treeTable

    - by Frank Nimphius
    A common developer requirement when working with af:tree or af:treeTable components is to programmatically disclose (expand) a specific node in the tree. If the node to disclose is not a top-level node, like a location in a LocationsView -> DepartmentsView -> EmployeesView hierarchy, you need to also disclose the node's parent node hierarchy for application users to see the fully expanded tree node structure. Working on ADF Code Corner sample #101, I wrote the following code lines that show a generic option for disclosing a tree node, starting from a handle to the node to disclose.

    The use case in ADF Code Corner sample #101 is a drag and drop operation from a table component to a tree, to relocate employees to a new department. The tree node that receives the drop is a department node contained in a location. In theory the location could be part of a country, and so on, to indicate the depth the tree may have. Based on this structure, the code below provides a generic solution to parse the current node's parent nodes and its child nodes.

    The drop event provided a rowKey for the tree node that received the drop. As in af:table, the tree row key is not of type oracle.jbo.domain.Key but an implementation of java.util.List that contains the row keys. The JUCtrlHierBinding class in the ADF binding layer, which represents the ADF tree binding at runtime, provides a method named findNodeByKeyPath that allows you to get a handle to the JUCtrlHierNodeBinding instance that represents a tree node in the binding layer.

        CollectionModel model = (CollectionModel) your_af_tree_reference.getValue();
        JUCtrlHierBinding treeBinding = (JUCtrlHierBinding) model.getWrappedData();
        JUCtrlHierNodeBinding treeDropNode = treeBinding.findNodeByKeyPath(dropRowKey);

    To disclose the tree node, you need to create a RowKeySet, which you do using the RowKeySetImpl class. Because the RowKeySet replaces any existing row key set in the tree, all other nodes are automatically closed.

        RowKeySetImpl rksImpl = new RowKeySetImpl();
        //the first key to add is the node that received the drop
        //operation (departments)
        rksImpl.add(dropRowKey);

    Similarly, the root node can be obtained from the tree binding. The root node is the end of all parent node iteration and therefore important.

        JUCtrlHierNodeBinding rootNode = treeBinding.getRootNodeBinding();

    The following code obtains a reference to the hierarchy of parent nodes until the root node is found.

        JUCtrlHierNodeBinding dropNodeParent = treeDropNode.getParent();

        //walk up the tree to expand all parent nodes
        while (dropNodeParent != null && dropNodeParent != rootNode) {
            //add the node's keyPath (remember it's a List) to the row key set
            rksImpl.add(dropNodeParent.getKeyPath());
            dropNodeParent = dropNodeParent.getParent();
        }

    Next, you disclose the drop node's immediate child nodes, as otherwise all you see is the department node. It's not quite exactly "dinner for one", but the procedure is very similar to the one handling the parent node keys.

        ArrayList<JUCtrlHierNodeBinding> childList =
            (ArrayList<JUCtrlHierNodeBinding>) treeDropNode.getChildren();

        for (JUCtrlHierNodeBinding nb : childList) {
            rksImpl.add(nb.getKeyPath());
        }

    Next, the row key set is defined as the disclosed row keys on the tree, so when you refresh (PPR) the tree the new disclosed state shows.

        tree.setDisclosedRowKeys(rksImpl);
        AdfFacesContext.getCurrentInstance().addPartialTarget(tree.getParent());

    The refresh in my use case is on the tree's parent component (a layout container), which usually shows the best effect for refreshing the tree component.


  • Write DAX queries in Report Builder #ssrs #dax #ssas #tabular

    - by Marco Russo (SQLBI)
    If you use Report Builder with Reporting Services, you can use DAX queries even if the editor for the Analysis Services provider does not support DAX syntax. In fact, the DMX editor that you can use in the Visual Studio editor of Reporting Services (see a previous post on that) is not available in Report Builder. However, as Sagar Salvi commented in this Microsoft Connect entry, you can use DAX query text in the query of a Dataset by using the OLE DB provider instead of the Analysis Services one. I think it's a good idea to show the steps required.

    First, create a data source using the OLE DB connection type, and provide in the connection string the provider (Provider), the server name (Data Source) and the database name (Initial Catalog), such as:

        Provider=MSOLAP;Data Source=SERVERNAME\TABULAR;Initial Catalog=AdventureWorks Tabular Model SQL 2012

    Then, create a Dataset using the data source previously defined, select the Text query type, and write the DAX code in the Query pane. You can also use the Query Designer window, which doesn't provide any particular help in writing the DAX query but at least can show a preview of the result of the query execution.

    I hope DAX will get better editors in the future... in the meantime, remember you can use DAX Studio to write and test your DAX queries, and DAX Formatter to improve their readability! If you want to learn the DAX query language, I suggest watching my video Data Analysis Expressions as a Query Language on Project Botticelli!


  • Critical Threads Optimization

    - by Rafael Vanoni
    Background

    One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware.

    On top of this increasing complexity, we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads: some will be designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior.

    We're very aware of these challenges in Solaris, and have been working to provide the best out-of-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers both to address issues caused by contention over shared hardware resources and to explicitly take advantage of features such as T4's dynamic threading.

    What it is

    The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it will likely be advantageous to give the producer more exclusive access to the hardware instead of having it compete for resources with all the consumers. In the case of a T4-based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available.

    How does it work?

    Using the previous example, in Solaris 11 all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces. If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime. If the system is fully committed, the scheduler will put all the available CPUs to work.

    Best practices

    If you're an application developer, we encourage you to look into assigning the right priorities for the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads are critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-box performance in Solaris should meet your workload's requirements. If you are looking into that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.


  • BI Applications Test Drive: Joint Partner+Oracle Go To Market Initiatives

    - by Mike.Hallett(at)Oracle-BI&EPM
    A challenge you may be facing is how to easily show the business value of BI to a set of customers. The key, we find, is to show best-in-class business analytics examples specific to a business person's role and needs - e.g. "HR analytics" for HR professionals, "spend analytics" for procurement professionals, and so on. We have created for you, our specialised partners, the ability to run Oracle BI Applications Test Drive workshops for your customers. These are carefully scripted to allow a customer business person (usually not IT) to navigate for themselves around a series of dashboards and analyses targeted to show how BI can help their business and drive ROI. These Oracle BI Applications Test Drive kits (in English) are now downloadable from our OMS4P/OPN portal; see them by clicking on this link: http://www.oracle.com/partners/secure/marketing/bi-apps-test-drive-519829.html. Translations of this kit into Italian, French, Spanish and German will be added to the portal soon. NOTE: these are not designed for "training" customers; they really address the need for an effective call to action for any customer you talk to who is in the early stages of exploring their options and the business benefits of a BI project, especially if they are already an Oracle applications customer (E-Business Suite, PeopleSoft, Siebel, JDE). For more demand generation kits see another blog article, "Joint Partner+Oracle Go To Market Initiatives: BI Customer Event Kits".


  • What is your strategy for converting RC builds into retail?

    - by Matthew PK
    We're trying to implement a strategy for how we transition our builds from RC to released retail code. When we label a build as a release candidate, we send it to QA for regression. If they approve it, that RC then becomes our released retail code. I like the idea of "obvious" labeling of versions, so that a user knows whether they have a beta, an RC or retail code - some obvious watermark in non-retail code (think of Windows 7, where RC or non-genuine builds show a watermark in the bottom right). But it seemed strange to us to manipulate the project (to remove the watermark) once it passed regression. If QA certified version a.b.c.d, then our retail code should be that same version, not a.b.c.d+1. What strategies have you employed to clearly label non-release software versions without incrementing your build to disable the watermarks in your retail code? One idea I've considered is having your build look for a signed file in the installer archive; non-release code wouldn't include this file, and so the app would know to display a watermark. But even this seems like QA is then working with non-release code. Ideas?
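    The signed-file idea from the question could look roughly like the sketch below. The marker file name is made up for illustration; the point is that QA tests the exact binaries that ship, and only the packaging step (which adds the signed marker) differs between RC and retail:

        using System.IO;
        using System.Security.Cryptography;
        using System.Security.Cryptography.X509Certificates;

        public static class ReleaseCheck
        {
            // "release.marker.dll" is a hypothetical Authenticode-signed file
            // that only the retail packaging step copies into the install folder.
            public static bool ShowWatermark(string installDir)
            {
                string marker = Path.Combine(installDir, "release.marker.dll");
                if (!File.Exists(marker))
                    return true; // no marker: beta/RC build, show the watermark

                try
                {
                    // Throws if the file carries no Authenticode signature.
                    X509Certificate.CreateFromSignedFile(marker);
                    // A real check would also validate the chain and publisher
                    // to stop someone dropping in an arbitrary signed file.
                    return false;
                }
                catch (CryptographicException)
                {
                    return true; // unsigned or tampered marker: treat as non-retail
                }
            }
        }

    Since the executable itself is byte-for-byte what QA regressed, the version number never has to change; the watermark decision is made at runtime from the packaging, not baked into the build.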


  • StarterSTS 1.5

    - by Your DisplayName here!
    I have had the 1.5 version of StarterSTS sitting here for quite some time now, but I was always reluctant to release it. Some of the reasons:

      - too many new features for a single (small) version change
      - too many features that are optional, like bridged authentication, which make the code very complex
      - the way I implemented Azure integration adds a dependency on the Azure SDK, even for "on-premise" installations - I don't like that
      - because I am using some WebForms bits and some WCF bits, the URL structure got messy; WebForms also doesn't help a lot with testability

    All of the above reasons together, plus the fact that I am the only architect, developer and tester on this project, made me conclude that I will cancel this release. But wait... StarterSTS 1.5 is fully functional. We use both the on-premise and Azure versions internally "in production". Cancelling means I will release the latest source code on CodePlex, but will not mark it as a "recommended release". I also won't produce updated screencasts and docs, but the setup is very similar to earlier versions. Feel free to use and customize 1.5 and give me feedback. On the good news front, I am working on a new version - welcome thinktecture IdentityServer. This version is based on MVC3 and the routing architecture, removes a lot of the clutter, has a SQL CE4 based configuration system, is more extensible, and is overall just cleaner. I will be able to upload CTPs very soon.


  • Small-scale database options for .NET

    - by raney
    I have a .NET 4.0/WPF based application I've developed and maintain for my company that acts as a friendly GUI central point of information, combining information pulled from a couple of SQL databases as well as CSV exports from a few other applications. I would like to build out my own database to support the entirety of the information that the application accesses, so that I could have a service running on my server that would read in the necessary remote SQL info and file exports. This would give the user's application a single database to connect to, and remove all of the file handling currently involved in the program (copying new CSV resources from a network location, reading them into memory on each launch). I have complete control and flexibility here as long as the user's experience isn't affected, and this is as much a learning experience as it is tidying up. The caveat being, I don't have much in the way of a budget. Right now I see my options as:

      - SQL Express - I'm comfortable with the server setup, and I like ADO.NET and LINQ to SQL. I feel that I have the least to learn here, and it would let me focus on SQL in a familiar environment. Perhaps in conjunction with Entity Framework?
      - MongoDB - I don't know a whole lot about it, but I've heard the name enough to make me curious. Brief research seems friendly enough, and there is .NET support. I like working with open source projects.

    My questions are:

      - What's popular and extensible right now? I'm not far from starting to job-hunt, and I'd like this project to be relevant going forward.
      - What am I missing? Pros, cons? Other options? What plays well with .NET?
      - What are the things I should be considering, and the questions I should be asking, when making a decision like this?

    Thanks for your time.


  • What's the best way to generate an NPC's face using web technologies?

    - by Vael Victus
    I'm in the process of creating a web app. I have many randomly-generated non-player characters in a database. I can pull a lot of information about them: their height, weight, down to eye color, hair color, and hair style. For this, I am solely interested in generating a graphical representation of the face. Currently the information is displayed with text in the nicest way possible, but I believe it's worth generating these faces for a more... human experience. Problem is, I'm no artist. I wouldn't mind commissioning an artist for this system, but I wouldn't know where to start. Were it 2007, I'd naturally think to myself that using Flash would be the best choice. I'd love to see "breathing" simulated. However, since Flash is on its way out, I'm not sure of a solid solution. With a previous game, I simply used layered .pngs to represent various aspects of the player's body: their armor, the face, the skin color. However, that solution wasn't very dynamic and felt very amateur. I can't go deep into this project feeling like that's an inferior way to present these faces, and I'm certain there's a better way. Can anyone give some suggestions on how to pull this off well?


  • Redhat cluster Vs Pacemaker Vs Gluster Vs Sheepdog

    - by chandank
    Changing the entire question, as the earlier one was very confusing. I have been exploring different clustering systems to run virtual machines on two different machines on a LAN with high availability. Currently I am already using a DRBD resource on two different machines in Primary/Secondary mode. In case the primary fails, I manually promote the secondary to primary and start the VM. I also explored Gluster, and it looks good; however, I would rather prefer clustering over Gluster (a user-space FS). So if anyone has an idea of which one would be better from an ease-of-use perspective, I would be interested. Moreover, the Sheepdog project appears good; however, I could not find much documentation/HOWTOs. I am using CentOS 6.


  • How should developers handle subpar working conditions? [closed]

    - by ivar
    I have been working at my current job for less than a year, and at the beginning I didn't have the courage to say anything about the things that bothered me. Now I'm a bit fed up and need things to get better. The first problem is not random, but I'll mention it anyway: we are running out of space, so every new employee gets a smaller table. We are promised that the space problem will be fixed soon. Almost every employee has a different keyboard, mouse and headphones (if any). Mine are a $10 keyboard, some random cheap mouse and some random crappy headphones with a mic. All of these were used and dirty when I got them. The number of monitors is 1-3, and of different sizes. I have 2 nice monitors and can't complain, but some people are given 1 small monitor. When it's their first job, they don't have the guts to ask for 2, even if most others have 2. Nobody seems to care, either. The project manager asked if it was OK; the new guy obviously said he could handle the 1 small one. Then the manager said: you can go ask for 1 more. I'm watching this and thinking: go and ask where? The company is trying to hire more people but is not doing much after the person has signed the contract. We are put in one room that is open to the hallway, and it's super noisy. Almost like a zoo at times. Even if nobody is talking, the crappy keyboards make too much noise. Is this normal? Am I too negative, and should I just do my job with what I was given? Should I demand better things? Should the company have some system so that everybody gets things in some price range?

