Search Results

Search found 14799 results on 592 pages for 'instance eval'.

  • Saddling your mountain lion with JDeveloper

    - by Blueberry Coder
    Last October, Apple released Java Update 2012-006. This patch brought the Apple-provided JDK for OS X Lion v10.7 and OS X Mountain Lion v10.8 to version 1.6.0_37. At the same time, it disabled the Apple Java plugins and removed the Java Preferences panel that enabled users to manage the various Java releases on their computer. On the Windows and Linux platforms, JDeveloper 11g R1 has been certified to run on Java 7 since patch set 5. This is not the case on OS X. (The above is not a typo. Apple's OS for personal computers is now known as OS X; the « Mac » prefix was dropped with the 10.8 release. And it's pronounced « Oh-Ess-Ten », by the way. Yes, I am a nitpicker. I know...) Please note JDeveloper 11g R2 is not certified either, on any platform. It will generally work, but there are known issues with ADF Mobile. Personally, I would recommend waiting for 12c before going to JDK 7.

    Now, suppose you have installed Oracle's JDK 7 on your Mac. JDeveloper will not run on it. It will not even install. Susan and I discovered this the hard way while setting up the ADF Mobile hands-on lab we ran at the UKOUG 2012 conference. The lab was a great success nevertheless, attracting nearly a hundred delegates. It was great to see the interest ADF Mobile already generates, especially among PL/SQL developers and DBAs. But what did we do to make it work?

    While Java Update 2012-006 removed the Java Preferences panel, it left OS X's command-line Java infrastructure in place. Thus, it is possible to invoke the Apple JDK 6 to start the JDeveloper installer. Suppose your user is named « Fred », and that the JDeveloper installer is on your desktop. You can execute the following command in a terminal window (on a single line) to start the installer:

        /usr/libexec/java_home --version 1.6.0 --exec java -jar /Users/Fred/Desktop/jdevstudio11116install.jar

    The JDeveloper installer, being provided a valid JDK reference, will set up the IDE and the embedded WebLogic Server instance accordingly. Clever engineering at its finest!

    Read the article

  • Is it okay to have people with multiple roles in a Scrum team?

    - by Wayne M
    I'm evaluating some Agile-style methodologies for possible introduction to my team. With Scrum, is it allowable to have the same person perform multiple roles? We have a small team of four developers and a web designer; we don't really have a lead (I fulfill this role), QA testers or business analysts, and all of our development tasks come from the CIO. Automated testing is seen as a total waste of time, and everything focuses on speed and not quality. What will happen is the CIO will come up with a development task (whether a feature or a bug) and give it to a developer (not to the whole team, to an individual, often in private or out of the blue) who is then expected to get it completed. The CIO doesn't gather requirements beyond the initial idea (and this has bitten us before, as we'll implement something only to find out that none of the end users can use the feature, because they weren't consulted or even informed about it before we developed it, and in a panic we'll be told to revert the change) but requires say in/approval of everything that we do.

    First things first: is a Scrum style something to consider to introduce some standards and practices? From reading, Scrum seems to rely on a bit more trust and communication and focuses more on project management than on development, which is something we are completely devoid of, as we don't have any semblance of project management at present.

    Second, if it can work, is it unreasonable for someone, let's say myself, to act as both ScrumMaster and a developer? Or for a developer to also be the Product Owner (although chances are this will be the CIO, who isn't a developer)? I realize the Scrum Master and the Product Owner should be different people, but at the same time I don't think we have anyone who has the qualities of a Product Owner (chances are it would turn into a "I need all these stories, I don't care how but get it done" type of deal and/or any freeze would be unfrozen on a whim).

    It seems to me that I might need to pick and choose pieces of Scrum/XP/Lean to compensate for how things are done currently, as it's highly unlikely that the mentality can be changed; for instance Pair Programming would never fly (seen as a waste: you get half the tasks done if you need two people for everything), TDD would be a hard sell, but short cycles would be welcomed.

    Read the article

  • New Worklist features on 12.1.3

    - by Vijay Shanmugam
    The following new Worklist features are available on E-Business Suite 12.1.3 via Patch 13646173.

    Ability to view comments on top of a notification: If an action is performed on a notification, such as Reassign, Request for Information or Provide Information, the recipient of the notification will see who performed the last action and the associated comment on top of the notification.

    Reassigning a request for information notification: If an approver requests more information on a notification from its submitter, the submitter now has two options: Answer Request for More Information, or Transfer Request for More Information. If the submitter thinks the requested information can be provided by another user, he/she can transfer the request to that user. Please note that only Transfer is supported for Request for More Information; once transferred, the submitter cannot access the notification and provide the requested information.

    Use actual sent date when reassigning a notification: The Sent field in the notification header always showed the date on which the notification was first created. If the notification was later reassigned, the Sent date was not updated to show the last action date. This caused problems in the following scenario: an approval notification was sent to JACK on 01-JAN-2012; JACK waited for 10 days before reassigning it to JILL on 10-JAN-2012; JILL does not see the notification as sent on 10-JAN-2012, but instead sees it as sent on 01-JAN-2012. Although the notification was originally created on 01-JAN-2012, it was sent to JILL only on 10-JAN-2012. The enhancement now shows the correct sent date in the Worklist and on the Notification Details page. Figure 1 depicts all of the above 3 features.

    Related Action History for response-required notifications: So far it was possible to embed the Action History of a response-required notification into another FYI notification using the #RELATED_HISTORY attribute (please refer to the Workflow Developer Guide for details about this attribute). The enhancement now enables developers to embed the Action History of one response-required notification into another response-required notification. To do so, create the message attribute #RELATED_HISTORY and set its value at run-time in the following format:

        {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME

    The TITLE, ITEM_TYPE and ITEM_KEY are optional values. TITLE is used as the Related Action History header title; if TITLE is not present, a default title "Related Action History" is shown. If ITEM_TYPE is present and ITEM_KEY is not (for example: {TITLE}[ITEM_TYPE]PROCESS_NAME:ACTIVITY_LABEL_NAME), the Related Action History is populated from the parent item type of the current item. If both ITEM_TYPE and ITEM_KEY are present (for example: {TITLE}[ITEM_TYPE:ITEM_KEY]PROCESS_NAME:ACTIVITY_LABEL_NAME), the Related Action History is populated from that specific instance activity. Figure 2 depicts the Related Action History feature.
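
    For illustration, a run-time value following this format might look like the line below; the title, item type, item key, process and activity names here are hypothetical examples, not values from the original post:

        {Approval History}[POAPPRV:2045]MAIN_APPROVAL_PROCESS:NOTIFY_APPROVER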

    Read the article

  • Where did I miss boot.properties?

    - by Dyade, Shailesh M
    Today one of my customers was trying to start a WebLogic Server (production instance). Though he was trying to start the server in the standard way, it was failing due to the error below:

        ####<Oct 22, 2012 12:14:43 PM BST> <Warning> <Security> <BanifB1> <> <main> <> <> <> <1350904483998> <BEA-090066> <Problem handling boot identity. The following exception was generated: weblogic.security.internal.encryption.EncryptionServiceException: weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed>

    This then escalated into the failure below:

        ####<Oct 22, 2012 12:16:45 PM BST> <Critical> <WebLogicServer> <BanifB1> <AdminServer> <main> <<WLS Kernel>> <> <> <1350904605837> <BEA-000386> <Server subsystem failed. Reason: java.lang.AssertionError: java.lang.reflect.InvocationTargetException
        java.lang.AssertionError: java.lang.reflect.InvocationTargetException
        weblogic.security.internal.encryption.EncryptionServiceException: weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed
        weblogic.security.internal.encryption.EncryptionServiceException: [Security:090219]Error decrypting Secret Key java.security.ProviderException: setSeed() failed
        at weblogic.security.internal.encryption.JSafeSecretKeyEncryptor.decryptSecretKey(JSafeSecretKeyEncryptor.java:121)

    The customer was facing this issue without any changes to the system; it had been stable and suddenly started failing the previous night. When we checked, the customer was manually entering the username and password, and config.xml had the entries encrypted. On verification, the customer had boot.properties in the servers/AdminServer/security folder, but DomainName/security didn't have this file. Adding boot.properties there fixed the issue.

    Regards
    Shailesh Dyade
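
    For reference, boot.properties is a plain two-line properties file holding the boot identity; WebLogic encrypts the values the next time the server starts. A minimal sketch, with placeholder credentials:

        username=weblogic
        password=your_admin_password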

    Read the article

  • Achieving decoupling in Model classes

    - by Guven
    I am trying to test-drive (or at least write unit tests for) my Model classes, but I noticed that my classes end up being too coupled. Since I can't break this coupling, writing unit tests is becoming harder and harder. To be more specific:

    Model Classes: These are the classes that hold the data in my application. They pretty much resemble POJOs (plain old Java objects), but they also have some methods. The application is not too big, so I have around 15 model classes.

    Coupling: Just to give an example, think of a simple case of Order Header - Order Item. The header knows the item and the item knows the header (it needs some information from the header for performing certain operations). Then, let's say there is the relationship between Order Item - Item Report. The item report needs the item as well. At this point, imagine writing tests for Item Report; you need to have an Order Header to carry out the tests. This is a simple case with 3 classes; things get more complicated with more classes. I can come up with decoupled classes when I design algorithms, persistence layers, UI interactions, etc., but with model classes I can't think of a way to separate them. They currently sit as one big chunk of classes that depend on each other.

    Here are some workarounds that I can think of:

    Data Generators: I have a package that generates sample data for my model classes. For example, the OrderHeaderGenerator class creates OrderHeaders with some basic data in them. I use the OrderHeaderGenerator from my ItemReport unit tests so that I get an instance of the OrderHeader class. The problem is these generators get complicated pretty fast, and then I also need to test the generators themselves; defeating the purpose a little bit.

    Interfaces instead of dependencies: I can come up with interfaces to get rid of the hard dependencies. For example, the OrderItem class would depend on the IOrderHeader interface. So, in my unit tests, I can easily mock the behaviour of an OrderHeader with a FakeOrderHeader class that implements the IOrderHeader interface. The problem with this approach is the complexity that the Model classes would end up having.

    Would you have other ideas on how to break this coupling in the model classes? Or, how to make it easier to unit-test the model classes?
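
    For concreteness, here is a minimal Java sketch of the "interfaces instead of dependencies" workaround, using the names mentioned in the question; the members shown are illustrative assumptions, not from the original post:

        // Interface extracted so OrderItem no longer depends on the concrete header.
        interface IOrderHeader {
            String getCustomerName();
        }

        // The real model class implements the interface.
        class OrderHeader implements IOrderHeader {
            private final String customerName;
            OrderHeader(String customerName) { this.customerName = customerName; }
            @Override public String getCustomerName() { return customerName; }
        }

        // OrderItem depends only on the abstraction, not on OrderHeader itself.
        class OrderItem {
            private final IOrderHeader header;
            OrderItem(IOrderHeader header) { this.header = header; }
            String describe() { return "Item for " + header.getCustomerName(); }
        }

        // In a unit test, a fake header replaces the real one.
        class FakeOrderHeader implements IOrderHeader {
            @Override public String getCustomerName() { return "Test Customer"; }
        }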

    Read the article

  • Criteria for selecting timeout value?

    - by stijn
    Situation: a piece of software reads frames of data from a file in a separate thread and puts them on a queue, which is emptied by another thread. That second thread periodically checks the queue and fails rather gracefully, by showing an error message stating the read timed out, if no data is available within a certain amount of time. Initially this timeout was set to 200 msec. There was no real reasoning behind that constant, but it worked fine. We measured on a couple of machines, and for large data frames, larger than what would be used by customers, a read took about 20 msec with no other load on the machine.

    However, one customer now gets timeout errors now and then (on the second try all is fine; probably the file is in cache or the virus scanner leaves it alone). The programmers are like, 'well, yeah, but that customer's machine is full of cruft, virus scanners, tons of unneeded background processes, etc.' Of course the customer is like, 'hey, this should just work, shouldn't it?' While the programmers have a point, since the software is heavy enough to validate the need for a dedicated machine, that does not make the customer happy. Increasing the timeout to 2 seconds, for example, solves the problem. But I'd like to make a proper decision now instead of just randomly picking some magic constant that is probably OK in 99% of cases.

    What criteria should be used for that? We could just pick a large number, but that feels wrong (and then we end up with a program that has the horrible behaviour of hanging when trying to read from a disconnected drive, for instance, whereas we'd rather make it show an error right away). Or we could make the timeout value a user setting, but then we need to document it clearly, and even then not all customers are tech-savvy enough to really understand what it does. Or we could wait until another customer reports timeouts and increase the value again. And again. Until we find something OK for 99.99% of the cases. Any good practice for this type of situation?
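
    One possible criterion (a sketch of one approach, not from the original question) is to derive the timeout from measured read latencies rather than a fixed constant: record samples in the field, take a high percentile, and apply a generous safety factor to absorb machine-to-machine noise:

        import java.util.Collections;
        import java.util.List;

        public final class TimeoutCalculator {
            // Returns a timeout in milliseconds: the p-th percentile of observed
            // read times, scaled by a safety factor.
            static long timeoutMillis(List<Long> samplesMillis, double percentile, double safetyFactor) {
                List<Long> sorted = new java.util.ArrayList<>(samplesMillis);
                Collections.sort(sorted);
                int idx = (int) Math.ceil(percentile * sorted.size()) - 1;
                long p = sorted.get(Math.max(idx, 0));
                return (long) (p * safetyFactor);
            }

            public static void main(String[] args) {
                // Hypothetical samples: most reads ~20 ms, with an occasional 180 ms outlier.
                List<Long> samples = List.of(18L, 22L, 19L, 25L, 21L, 180L, 20L, 23L);
                System.out.println(timeoutMillis(samples, 0.99, 10.0)); // prints 1800
            }
        }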

    Read the article

  • SharePoint Thoughts

    - by Tim Murphy
    I was listening to .NET Rocks episode #713 and it got me thinking about a number of SharePoint-related topics. I have been working with SharePoint since the 2001 product came out and have watched it evolve over the years. Today SharePoint is one of the most powerful and flexible products in the market. Of course that doesn't mean there isn't room for improvement (a lot of improvement, in fact), and with much power comes much responsibility.

    My main gripe these days is that you have to develop on a server instance. This adds a real barrier to entry for developers. You either have to run VMWare or Hyper-V on your developer machine or actually develop on your dev server for most tasks. Yes, there is a way to set up a Windows 7 machine with the SharePoint components, but it is very hackish. Beyond that, the tools in VS2010 are a great leap forward from past generations. Not requiring a separate package creation tool is not the least of the improvements. Better workflow and web part development have also eased the burden of many developers.

    The other thing the show brought up in my thoughts was more around usage. Users want to be able to self-serve everything without regard to what effect that has on leveraging their data from a corporate perspective. My coworkers who work on Lotus Notes ask why the user can't just do whatever they want? Part of the reason is that those features have not been built, but the other part is that giving them those features is often like giving an infant a loaded handgun. You can do it, but it doesn't make it the smart thing to do.

    As with any tool that is going to be used in the enterprise, it should be subject to governance. If controls are not in place, as they said in the episode of DNR, the document libraries, and I believe SharePoint in general, start to look as disarrayed and unusable as a shared drive. Consider these factors before giving in to every whim of the users. You should be able to explain to them the tradeoffs of giving them full control versus being able to leverage the information they collect to the benefit of the organization.

    These are just a couple of the thoughts that were triggered by the show. I'm sure there are more discussions that can be had. Feel free to leave your comments about the pros and cons of SharePoint.

    Read the article

  • SQL 2014 does data the way developers want

    - by Rob Farley
    A post I’ve been meaning to write for a while; good that it fits with this month’s T-SQL Tuesday, hosted by Joey D’Antoni (@jdanton).

    Ever since I got into databases, I’ve been a fan. I studied Pure Maths at university (as well as Computer Science), and am very comfortable with Set Theory, which undergirds relational database concepts. But I’ve also spent a long time as a developer, and appreciate that databases don’t exactly fit within the stuff I learned in my first year of uni, particularly the “Algorithms and Data Structures” subject, in which we studied concepts like linked lists. Writing in languages like C, we used pointers to quickly move around data, without a database in sight. Of course, if we had a power failure all this data was lost, as it was only persisted in RAM. Perhaps it’s why I’m a fan of database internals, of indexes, latches, execution plans, and so on – the developer in me wants to be reassured that we’re getting to the data as efficiently as possible.

    Back when SQL Server 2005 was approaching, one of the big stories was around CLR. Many were saying that T-SQL stored procedures would be a thing of the past because we now had CLR, and that it was obviously going to be much faster than the abstracted T-SQL. Around the same time, we were seeing technologies like Linq-to-SQL produce poor T-SQL equivalents, and developers had had a gutful. They wanted to move away from T-SQL, having lost trust in it. I was never one of those developers, because I’d looked under the covers and knew that despite being abstracted, T-SQL was still a good way of getting to data. It worked for me, appealing to both my Set Theory side and my Developer side. CLR hasn’t exactly become the default option for stored procedures, although there are plenty of situations where it can be useful for getting faster performance.

    SQL Server 2014 is different though, through Hekaton – its In-Memory OLTP environment. When you create a table using Hekaton (that is, a memory-optimized one), the table you create is the kind of thing you’d’ve made as a developer. It creates code in C leveraging structs and pointers and arrays, which it compiles into fast code. When you insert data into it, it creates a new instance of a struct in memory, and adds it to an array. When the insert is committed, a small write is made to the transaction log to make sure it’s durable, but none of the locking and latching behaviour that typifies transactional systems is needed. Indexes are done using hashes and Bw-trees (which avoid locking through the use of pointers) and by handling each update as a delete-and-insert. This is data the way that developers do it when they’re coding for performance – the way I was taught at university before I learned about databases. Being done in C, it compiles to very quick code, and although these tables don’t support every feature that regular SQL tables do, this is still an excellent direction that has been taken. @rob_farley

    Read the article

  • Running Built-In Test Simulator with SOA Suite Healthcare 11g in PS4 and PS5

    - by Shub Lahiri, A-Team
    Background

    The SOA Suite for Healthcare Integration pack comes with a pre-installed simulator that can be used as an external endpoint to generate inbound and outbound HL7 traffic on specified MLLP ports. This is a command-line utility that can be very handy when trying to build a complete end-to-end demo within a standalone, closed environment. The ant-based utility accepts the name of a configuration file as the command-line input argument. The format of this configuration file has changed between PS4 and PS5: in PS4, the configuration file was XML based, and in PS5 it is name-value property based. The rest of this note highlights these differences and provides samples that can be used to run the first scenario from the product samples set.

    PS4 - Configuration File

    The sample configuration file for PS4 is shown below. The configuration file contains information about the following items:

    - Directory for incoming and outgoing files for the host running SOA Suite Healthcare
    - Polling interval for the directory
    - External endpoint logical names
    - External endpoint server host name and ports
    - Message throughput to be simulated for generating outbound messages
    - Documents to be handled by different endpoints

    A copy of this file can be downloaded from here.

    PS5 - Configuration File

    The corresponding sample configuration file for PS5 is shown below. The configuration file contains similar information about the sample scenario but is not in XML format; it has name-value pairs specified in the form of a properties file. This sample file can be downloaded from here.

    Simulator Configuration

    Before running the simulator, the environment has to be set by defining the proper ANT_HOME and JAVA_HOME. The following extract is taken from a working sample shell script to set the environment. Also, as a part of setting the environment, template jndi.properties and logging.properties files can be generated by using the following ant command:

        ant -f ant-b2bsimulator-util.xml b2bsimulator-prop

    Sample jndi.properties and logging.properties are shown below and can be modified as needed. The jndi.properties file contains information about connectivity to the local WebLogic Managed Server instance, and the logging.properties file controls the amount of logging that can be generated from the running simulator process.

    Simulator Usage - Start and Stop

    The command syntax to launch the simulator via ant is the same in PS4 and PS5. Only the appropriate configuration file has to be supplied as the command-line argument, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstart -Dargs="simulator1.hl7-config.xml"

    This will start the simulator, which will keep running to provide an active external endpoint for the SOA Healthcare Integration engine. To stop the simulator, a similar ant command can be used, for example:

        ant -f ant-b2bsimulator-util.xml b2bsimulatorstop

    Read the article

  • Microsoft and Application Architectures

    Microsoft has dealt with several kinds of application architectures, including but not limited to desktop applications, web applications, operating systems, relational database systems, Windows services, and web services. Because of the size and market share of Microsoft, virtually every modern language works with or around a Microsoft product. Some of these languages include: Visual Basic, VB.Net, C#, C++, C, ASP.NET, ASP, HTML, CSS, JavaScript, Java and XML.

    From my experience, Microsoft strives to maintain an n-tier application standard where an application is composed of multiple layers that perform specific functions; for example, the presentation layer, business layer and data access layer are three general layers that just about every formally structured application contains. The presentation layer contains anything to do with displaying information to the screen and how it appears on the screen. The business layer is the middle man between the presentation layer and the data access layer, and transforms data from the data access layer into usable information to be stored later or sent to an output device through the presentation layer. The data access layer does as its name implies: it allows the business layer to access data from a data source like MS SQL Server, XML, or another data source.

    One of my favorite technologies that Microsoft has come out with recently is the .NET Framework. This framework allows developers to code an application in multiple languages and compiles them into a common intermediate language that is executed by the Common Language Runtime (CLR). This allows VB and C# developers to work seamlessly together as if they were working in the same project. The only real disadvantage to using the .NET Framework is that it only natively runs on Microsoft operating systems. However, Microsoft does control a majority of the operating systems currently installed on modern computers and servers, especially personal home computers. Given that the Microsoft .NET Framework is so flexible, it is ideal for businesses to develop applications around it, as long as they are willing to commit to using Microsoft technologies and operating systems in the future. I have been a professional developer for about 9+ years now and have seen the .NET Framework work flawlessly in just about every instance I have used it. In addition, I have used it to develop web applications, mobile phone applications, desktop applications, web service applications, and Windows service applications, to name a few.
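
    As a rough illustration of the three general layers described above (a minimal sketch in Java with hypothetical names; the discussion itself concerns the .NET stack):

        // Data access layer: the only code that knows where the data lives.
        class UserRepository {
            String findUserName(int id) { return "user-" + id; } // stands in for a real query
        }

        // Business layer: transforms raw data into usable information.
        class UserService {
            private final UserRepository repo = new UserRepository();
            String greetingFor(int id) { return "Hello, " + repo.findUserName(id) + "!"; }
        }

        // Presentation layer: only concerned with displaying the result.
        public class ConsoleApp {
            public static void main(String[] args) {
                System.out.println(new UserService().greetingFor(42)); // prints: Hello, user-42!
            }
        }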

    Read the article

  • How to keep a data structure synchronized over a network?

    - by David Gouveia
    Context

    In the game I'm working on (a sort of point-and-click graphic adventure), pretty much everything that happens in the game world is controlled by an action manager that is structured a bit like the image in the original post. So for instance, if the result of examining an object should make the character say hello, walk a bit and then sit down, I simply spawn the following code:

        var actionGroup = actionManager.CreateGroup();
        actionGroup.Add(new TalkAction("Guybrush", "Hello there!"));
        actionGroup.Add(new WalkAction("Guybrush", new Vector2(300, 300)));
        actionGroup.Add(new SetAnimationAction("Guybrush", "Sit"));

    This creates a new action group (an entire line in the image above) and adds it to the manager. All of the groups are executed in parallel, but actions within each group are chained together so that the second one only starts after the first one finishes. When the last action in a group finishes, the group is destroyed.

    Problem

    Now I need to replicate this information across a network, so that in a multiplayer session all players see the same thing. Serializing the individual actions is not the problem. But I'm an absolute beginner when it comes to networking and I have a few questions. I think for the sake of simplicity in this discussion we can abstract the action manager component to being simply:

        var actionManager = new List<List<string>>();

    How should I proceed to keep the contents of the above data structure synchronized between all players? Besides the core question, I also have a few other concerns related to it (i.e. all possible implications of the same problem above):

    - If I use a server/client architecture (with one of the players acting as both a server and a client), and one of the clients has spawned a group of actions, should he add them directly to the manager, or only send a request to the server, which in turn will order every client to add that group?
    - What about packet losses and the like? The game is deterministic, but I'm thinking that any discrepancy in the sequence of actions executed in a client could lead to inconsistent states of the world. How do I safeguard against that sort of problem?
    - What if I add too many actions at once, won't that cause problems for the connection? Any way to alleviate that?

    Read the article

  • The Minimalist Approach to Content Governance - Create Phase

    - by Kellsey Ruppel
    Originally posted by John Brunswick.

    In this installment of our Minimalist Approach to Content Governance we finally get to the fun part of the content creation process! Once the content requester has addressed the items outlined in the Request Phase, it is time to set up and begin the production of content. For this to be done correctly, it is important that the content be assigned appropriate workflow and security information. As in our prior phase, let's take a look at what can be done to streamline this process, as contributors are focused on getting information to their end users as quickly as possible. This often means that details around how to ensure that the materials are properly managed can be overlooked, but fortunately there are some techniques that leverage our content management system's native capabilities to automatically take care of some of the details.

    1. Determine Access

    Why - Even if content is not something that needs to be restricted for security reasons, it is helpful to apply access rights so that the content ends up being visible only to users that it relates to. This will greatly improve the user experience. For instance, if your team is working on a group project, many of your fellow company employees do not need to see the content that is being worked on for that project.

    How - Make use of native content features that allow propagation of security and metadata from parent folders within your content system that have been set up for your particular effort. This makes it painless to enforce security, as well as metadata policies, for even the most unorganized users. The default settings at a parent level can be set once the content creation request has been accepted and a location in the content management system is assigned for your specific project.

    Impact - Users can find information with less effort, as they will only be exposed to what they need for their work and can leverage advanced search features to take advantage of metadata assigned to content. The combination of default security and metadata will also help in running reports against the content in the Manage and Retire stages that we will discuss in the next 2 posts.

    2. Assign Workflow (optional depending on nature of content)

    Why - Every case for workflow is going to be a bit different, but it generally involves ensuring that content conforms to management, legal and/or editorial requirements.

    How - Oracle's Universal Content Management offers two ways of helping to workflow content without much effort. Workflow can be applied to content based on Criteria acting on metadata, or explicitly assigned to content with a Basic workflow.

    Impact - Any content that needs additional attention before release is addressed, allowing users to comment and version until a suitable result is reached.

    By using inheritance from parent folders within the content management system, content can automatically be given the right security, metadata and workflow information for a particular project's content. This relieves the burden of doing this for every piece of content from management teams and content contributors. We will cover more about the management phase within the content lifecycle in our next installment.

    Read the article

  • Nashorn in the Twitterverse

    - by jlaskey
    I have been following how often Nashorn has been showing up on the net.  Nashorn got a burst of tweets when we announced Project Nashorn and I was curious how Nashorn was trending per day, maybe graph the result.  Counting tweets manually seemed mindless, so why not write a program to do the same.

    This is where Nashorn + Java came shining through.  There is a very nice Java library out there called Twitter4J (https://github.com/yusuke/twitter4j) that handles all things Twitter.  After running bin/getAccessToken.sh to get a twitter4j.properties file with personal authorization, all I had to do to run my simple exploratory app was:

        nashorn -cp $TWITTER4J/twitter4j-core-3.0.1.jar GetHomeTimeline.js

    The content of GetHomeTimeline.js is as follows:

        var twitter4j      = Packages.twitter4j;
        var TwitterFactory = twitter4j.TwitterFactory;
        var Query          = twitter4j.Query;

        var twitter = new TwitterFactory().instance;
        var query   = new Query("nashorn OR nashornjs");
        query.count = 100;

        do {
            var result = twitter.search(query);
            var tweets = result.tweets;
            for each (tweet in tweets) {
                print("@" + tweet.user.screenName + "\t" + tweet.text);
            }
        } while (query = result.nextQuery());

    How easy was that?  Now to hook it up to the JavaFX graphing library...

    Read the article

  • How can I be prepared to join a company?

    - by Aerovistae
    There's more to it than that, but this title was the best way I could think of to sum it up. I'm a senior in a good computer science program, and I'm graduating early. About to start interviews and all whatnot. I'm not a super-experienced programmer, not one of those people who started in middle school. I'm decent at this, but I'm not among the best, not nearly. I have to do an awful lot of googling.

    So today I'm meeting some fellow for lunch at a campus cafe to discuss some front-end details when this tall, good-looking guy begs pardon, says he's new to campus, says he's wondering if we know where he can go to sign up for recruiting developers. Quickly evolves into long conversation: he's the CEO of a seems-to-be-doing-well start-up. Hiring passionate interns and full-times. Sounds great!

    I take one look at his site on my own computer later, immediately spot a major bug. No idea how to fix it, but I see it. I go over to the page code, and good god. It's the standard amount of code you would expect from a full-scale web application, a couple dozen pages of HTML and scripts. I don't even know where to start reading it. I've built sites from scratch, but obviously never on that scale, nor have I ever worked on one of that scale. I have no idea which bit might generate the bug.

    But that sets me thinking: How could someone like me possibly settle into an environment like that? A start-up is a very high-pressure working environment. I don't know if I can work at that pace under those constraints-- I would hate to let people down. And with only 10 employees, it's not like anyone has much time to help you get your bearings.

    Somewhere in there is a question. Can you see it? I'm asking for general advice here. Maybe even anecdotal advice. Is joining a start-up right out of college a scary process? Am I overestimating what it would take to figure out the mass of code behind this site? What's the likelihood a decent but only moderately-experienced coder could earn his pay at such a place? For instance, I know nothing of server-side/back-end programming. Never touched it. That scares me.

    Read the article

  • Using Query Classes With NHibernate

    - by Liam McLennan
    Even when using an ORM, such as NHibernate, the developer still has to decide how to perform queries. The simplest strategy is to get access to an ISession and directly perform a query whenever you need data. The problem is that doing so spreads query logic throughout the entire application – a clear violation of the Single Responsibility Principle. A more advanced strategy is to use Eric Evans' Repository pattern, thus isolating all query logic within the repository classes. I prefer to use Query Classes. Every query needed by the application is represented by a query class, aka a specification. To perform a query I:

    1. Instantiate a new instance of the required query class, providing any data that it needs
    2. Pass the instantiated query class to an extension method on NHibernate's ISession type

    To query my database for all people over the age of sixteen looks like this:

        [Test]
        public void QueryBySpecification()
        {
            var canDriveSpecification = new PeopleOverAgeSpecification(16);
            var allPeopleOfDrivingAge = session.QueryBySpecification(canDriveSpecification);
        }

    To be able to query for people over a certain age I had to create a suitable query class:

        public class PeopleOverAgeSpecification : Specification<Person>
        {
            private readonly int age;

            public PeopleOverAgeSpecification(int age)
            {
                this.age = age;
            }

            public override IQueryable<Person> Reduce(IQueryable<Person> collection)
            {
                return collection.Where(person => person.Age > age);
            }

            public override IQueryable<Person> Sort(IQueryable<Person> collection)
            {
                return collection.OrderBy(person => person.Name);
            }
        }

    Finally, the extension method to add QueryBySpecification to ISession:

        public static class SessionExtensions
        {
            public static IEnumerable<T> QueryBySpecification<T>(this ISession session, Specification<T> specification)
            {
                return specification.Fetch(
                    specification.Sort(
                        specification.Reduce(session.Query<T>())
                    )
                );
            }
        }

    The inspiration for this style of data access came from Ayende's post Do You Need a Framework?. I am sick of working through multiple layers of abstraction that don't do anything. Have you ever seen code that required a service layer to call a method on a repository, that delegated to a common repository base class that wrapped an ORM's unit of work? I can achieve the same thing with NHibernate's ISession and a single extension method. If you're interested you can get the full Query Classes example source from Github.

    Read the article

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly, and it is clear that there is a tough choice. On the one hand, people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than an event on the East Coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event.

    Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it's somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, not with the family, and that they want a well-laid-out conference.

    What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it's important to have industry-leading speakers, MVPs, etc. at the conference; there were only questions about Microsoft speakers. I know survey writing makes it very difficult to avoid biasing the answers one way or another. There was also no choice to show people's preference: would people prefer Microsoft speakers, or the summit to be held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers rather than the SQLCAT and CSS teams; surely that indicates another issue, a lack of understanding of what these teams do.

    All in all, it is clear that people showed they want an event outside of Seattle and don't want PASS to be putting money into that instead of into other community activities. I find it surprising that there appears to have been a huge weighting against certain questions, which prioritised them over the huge desire for a PASS summit outside of Seattle. Let's see where we will be in 2013, or maybe they will rethink 2012, who knows.

    Read the article

  • Image first loaded, then it isn't? (XNA)

    - by M0rgenstern
    I am very confused at the moment. I have the following class (just a part of it):

        public class GUIWindow
        {
            #region Static Fields
            // The standard image for windows.
            public static IngameImage StandardBackgroundImage;
            #endregion
        }

    IngameImage is just one of my own classes, but actually it contains a Texture2D (and some other things). In another class I load a list of GUIWindows by deserializing an XML file:

        public static GUI Initializazion(string pXMLPath, ContentManager pConMan)
        {
            GUI myGUI = pConMan.Load<GUI>(pXMLPath);
            GUIWindow.StandardBackgroundImage = new IngameImage(
                pConMan.Load<Texture2D>(myGUI.WindowStandardBackgroundImagePath),
                Vector2.Zero, 1024, 600, 1, 0, Color.White, 1.0f, true, false, false);
            System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            myGUI.Windows = pConMan.Load<List<GUIWindow>>(myGUI.GUIFormatXMLPath);
            System.Console.WriteLine("Windows loaded");
            return myGUI;
        }

    Here this line:

        System.Console.WriteLine("Image loaded? " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));

    prints "true". To load the GUIWindows I need an "empty" constructor, which looks like this:

        public GUIWindow()
        {
            Name = "";
            Buttons = new List<Button>();
            ImagePath = "";
            System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));
            //Image = new IngameImage(StandardBackgroundImage);
            //System.Console.WriteLine(
            //Image.IsActive = false;
            SelectedButton = null;
            IsActive = false;
        }

    As you can see, I commented lines out in the constructor, because otherwise this would crash. Here the line

        System.Console.WriteLine("Image loaded? (In win) " + (GUIWindow.StandardBackgroundImage.ImageStrip != null));

    doesn't print anything; it just crashes with the following error message:

        Building content threw NullReferenceException: Object reference not set to an object instance.

    Why does this happen? Before the program wants to load the list, it prints "true". But in the constructor, that is, during the loading of the list, it prints "false". Can anybody please tell me why this happens and how to fix it?

    Read the article

  • How to make my Oracle update/insert action through Java faster?

    - by gunbuster363
    I am facing a problem in my company: our program's speed is not fast enough. To be more specific, we are a telecommunications company and this program handles call/internet surfing transactions made by every mobile phone user in our city. Because the amount of downloaded content generated by the iPhone users is just too much, our program cannot handle it fast enough. The situation is that the number of transactions made by users is double the number of transactions processed by our program. Most of the running time of the program is dominated by DB transactions. I've searched through the internet and browsed some sites (for example: http://www.javaperformancetuning.com/tips/rawtips.shtml) talking about Java performance with databases, but I cannot find a suggestion suitable for us. These tips are either not applicable or already in use, for instance:

    1. Use prepared statements. Use parameterized SQL. Already using prepared statements. Each call uses different parameters, by clearing parameters and setting parameters.

    2. Tune the SQL to minimize the data returned (e.g. not 'SELECT *'). Sure, already done.

    3. Use connection pooling. We hold a single connection during the program's execution, and I doubt that pooling can solve the problem, because our program acts as a single user, so there is no concurrent access to the DB. If any of you think pooling is good, please tell me why. Thanks.

    4. Try to combine queries and batch updates. Cannot do it. Every query/insert/update depends on the database's information. For example, we look up the DB for the client's information; if we cannot find his usage, we insert the usage into the DB, otherwise we do an update.

    5. Close resources (Connections, Statements, ResultSets) when finished. Sure.

    6. Select the fastest JDBC driver. I don't know. I've searched on the internet about the types of driver available and I am very confused. We use oracle.jdbc.driver.OracleDriver and we use thin instead of oci; that's all I know. In addition, our program is two-tier (Java <- Oracle).

    7. Turn off auto-commit. Already done that.

    Looking forward to any help. Thank you very much.
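
    For reference, a minimal Java sketch of the prepared-statement reuse described in point 1; the table and column names here are hypothetical:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public final class UsageUpdater {
            // One statement, prepared once and reused with different parameters.
            static void updateUsage(Connection conn, long[] clientIds, long[] bytesUsed) throws SQLException {
                String sql = "UPDATE client_usage SET bytes = bytes + ? WHERE client_id = ?";
                try (PreparedStatement ps = conn.prepareStatement(sql)) {
                    for (int i = 0; i < clientIds.length; i++) {
                        ps.clearParameters();        // clear, then set new parameters, as described above
                        ps.setLong(1, bytesUsed[i]);
                        ps.setLong(2, clientIds[i]);
                        ps.executeUpdate();
                    }
                }
            }
        }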

    Read the article

  • Lenovo Thinkpad X1 Carbon support

    - by Robottinosino
    I am considering selling my Mac to get money towards a Lenovo Thinkpad X1, because what I really want is to be running an Ubuntu system all the time. Is this machine completely supported in Ubuntu, with no tiny little feature missing just because I am "going Linux"?

    Optional user story section, skip to the question below if you don't have time: I have a friend who bought a "works on Ubuntu" system a year ago and has hated the fact ever since. The battery lasts less than if he boots into Windows (which he despises), and he ascribes that to "no good OS/hardware integration and support for advanced chipset power management features". There is odd behaviour on suspend/resume/hibernate (he says: "when it works 90% of the time and the other 10% it makes you lose your work, it is as good as broken; 90% is the same as 0%"), some occasional graphics card glitches he can perfectly well live with and has almost grown affectionate to, and finally, and this is what would make him undo his choice if he could, bad "input device drivers". He says the trackpoint and trackpad just "feel different", "so much better" on Windows, and that was impossible to know from the website brochure. That story makes me very doubtful... but I want to abandon this "walled garden" of a prison that is my Mac and go Ubuntu all the way, no doubt about that! My dilemma at this time is just: "I don't want to live with those eternal frustrations for sure"!

    Here's a directly answerable phrasing of my question:

    - Is the Lenovo Thinkpad X1 supported on Ubuntu? Yes/no, which version?
    - Which hardware features are not supported? Provide a list
    - Optionally: sort the list in descending order of frustration from your experience
    - Optionally: mention if there are acceptable workarounds to the "out-of-the-box" condition described in the earlier points and whether this ameliorates frustration at least to "tolerable" levels

    Comment: the Ubuntu hardware certification page is so not-for-end-users it's unreal. Whoa. What would make it end-user friendly is:

    - A link saying "buy here and you'll be just fine, this is the right configuration for you, it'll work as long as you press BUY on that page and don't browse further"
    - Removing mentions of "may" and "might not work". Just tell it straight: press buy here and you will get a working system with the exception of A, B, C (so that I can decide whether the philosophical "freedom pleasure" I get from escaping an Apple world is enough to off-balance the loss, for instance, of the Bluetooth capabilities that I of course use on my Mac but "could" lose in order to use free (as in freedom) software)

    The certification page fails to dispel doubts in me as an end-user. I don't feel "eased into Ubuntu", I feel "partially informed".

    Read the article

  • How to display image in second layer in Cocos2d

    - by PeterK
    I am very new to Cocos2d and am testing displaying an image over the "Hello World" text on a second layer, and I need help getting it to work. I guess it is some basic stuff, and I appreciate any tips etc. I know that it works if I put the display code in myLayer1's "init", or if I call [self goHere] from the "init" of myLayer1, but I want to call "goHere" directly. I have the following code:

    HelloWorld.m:

        #import "HelloWorldLayer.h"
        #import "myLayer1.h"

        // HelloWorldLayer implementation
        @implementation HelloWorldLayer

        +(CCScene *) scene
        {
            // 'scene' is an autorelease object.
            CCScene *scene = [CCScene node];
            // 'layer' is an autorelease object.
            HelloWorldLayer *layer = [HelloWorldLayer node];
            myLayer1 *layer1 = [myLayer1 node];
            // add layer as a child to scene
            [scene addChild: layer];
            [scene addChild: layer1];
            // return the scene
            return scene;
        }

        // on "init" you need to initialize your instance
        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init])) {
                // create and initialize a Label
                CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64];
                // ask the director for the window size
                CGSize size = [[CCDirector sharedDirector] winSize];
                // position the label on the center of the screen
                label.position = ccp( size.width /2 , size.height/2 );
                // add the label as a child to this Layer
                [self addChild: label];

                myLayer1 *a1 = [myLayer1 new];
                [a1 goHere];
                [myLayer1 release];
            }
            return self;
        }

    myLayer1.m:

        #import "myLayer1.h"

        @implementation myLayer1

        -(void)goHere
        {
            NSLog(@">>>>goHere<<<<");
            CGSize size = [[CCDirector sharedDirector] winSize];
            CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"];
            vv.position = ccp( size.width /2 , size.height/2 );
            [self addChild:vv z:3];
        }

        -(id) init
        {
            // always call "super" init
            // Apple recommends to re-assign "self" with the "super" return value
            if( (self=[super init])) {
            }
            return self;
        }

        @end

    Read the article

  • 5.1 surround sound

    - by rocker9455
    OK, so I've always had trouble with enabling 5.1 in Ubuntu.

    Running 'alsamixer', I have: Master, Headphones, PCM, Front, Front Mi, Front Mi, Surround, Center. All are at 100%. Card: HDA Intel. Chip: Realtek ALC888. (This is my onboard sound; it's a Dell Studio with 7.1 integrated sound.)

    Running "speaker-test -c6 -twav" I only get the front 2 speakers (right/left) making any noise. The others make no noise at all. I have no other sound card to use, as all my PCI slots are used up.

    My daemon.conf:

        ; daemonize = no
        ; fail = yes
        ; allow-module-loading = yes
        ; allow-exit = yes
        ; use-pid-file = yes
        ; system-instance = no
        ; enable-shm = yes
        ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
        ; lock-memory = no
        ; cpu-limit = no
        ; high-priority = yes
        ; nice-level = -11
        ; realtime-scheduling = yes
        ; realtime-priority = 5
        ; exit-idle-time = 20
        ; scache-idle-time = 20
        ; dl-search-path = (depends on architecture)
        ; load-default-script-file = yes
        ; default-script-file =
        ; log-target = auto
        ; log-level = notice
        ; log-meta = no
        ; log-time = no
        ; log-backtrace = 0
        resample-method = speex-float-1
        ; enable-remixing = yes
        ; enable-lfe-remixing = no
        flat-volumes = no
        ; rlimit-fsize = -1
        ; rlimit-data = -1
        ; rlimit-stack = -1
        ; rlimit-core = -1
        ; rlimit-as = -1
        ; rlimit-rss = -1
        ; rlimit-nproc = -1
        ; rlimit-nofile = 256
        ; rlimit-memlock = -1
        ; rlimit-locks = -1
        ; rlimit-sigpending = -1
        ; rlimit-msgqueue = -1
        ; rlimit-nice = 31
        ; rlimit-rtprio = 9
        ; rlimit-rttime = 1000000
        ; default-sample-format = s16le
        ; default-sample-rate = 44100
        ; default-sample-channels = 6
        ; default-channel-map = front-left,front-right
        default-fragments = 8
        default-fragment-size-msec = 10

    Read the article

  • Defining a service layer: the text-based adventure

    - by Stacy Vicknair
    Applications these days have more options than ever for a user interface, and the number is only going to grow. A successful product might require native applications for mobile devices, a regular web implementation, or even a gaming console. These systems will often be centralized and data driven. The solution is a fairly solitary one: a service layer! Simply put, take what's shared and put it behind a physical or abstract layer that defines the boundary between the specific user interface and the shared content.

    I know, I know, none of this is complicated. But sometimes it can be difficult to discern what belongs on which side of the line. For instance, say we're creating a service that will provide content for both an ASP.NET MVC application and a WP7 application. Although the content served to each application is the same, there are different paradigms and patterns for displaying that data in the different environments. In ASP.NET MVC, you may create a model specific to a page that combines necessary information. In the WP7 application you might require different sets of data that you will connect via MVVM with the view. The general rule of thumb is that any shared content, business rules, or data should exist separately. Any element that is specific to the current UI implementation should be included in a separate library or with the UI implementation itself. The WP7 application doesn't need my MVC-specific model classes. My MVC application doesn't require those INotifyPropertyChanged viewmodels that the WP7 application depends on. In both cases, there should be additional processing done above the service layer to massage the data to the application's specific needs.

    Service-ocalypse: the text based adventure

    What helps me the most in deciding whether something belongs coupled to the UI implementation or in the shared implementation is thinking of the simplest implementation you could have: a console application. You might have played a game like Peasant's Quest: the console app is the text-based adventure game version of your application. If your service was consumed in its simplest form, you would simply have a console-based API for it that issues requests. Maybe those requests aren't SWIM TO BOAT, but they might be CREATE USER JOHN. If I issue a request, I expect that request to be issued to the service. If the service has any exceptions or issues with my input, that business logic should be encapsulated in that service, not implemented in the UI. The service layer should be your functional application in its entirety, and anything above that layer should only assist with the display of that information.
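
    To make the "text-based adventure" test concrete, here is a minimal Java sketch (the service and class names are hypothetical): the business rule lives in the service, and the console front end only parses input and displays the result, exactly as any other UI on top of the same service would:

        import java.util.Scanner;

        // Shared service layer: all business rules live here, regardless of UI.
        interface UserService {
            String createUser(String name); // returns a status message; validation happens inside
        }

        class SimpleUserService implements UserService {
            @Override
            public String createUser(String name) {
                if (name == null || name.isBlank()) return "ERROR: name required"; // business rule, not UI logic
                return "CREATED USER " + name.toUpperCase();
            }
        }

        // Simplest possible consumer: the "text-based adventure" console client.
        public class ConsoleClient {
            public static void main(String[] args) {
                UserService service = new SimpleUserService();
                String line = new Scanner(System.in).nextLine(); // e.g. "CREATE USER JOHN"
                if (line.startsWith("CREATE USER ")) {
                    System.out.println(service.createUser(line.substring("CREATE USER ".length())));
                }
            }
        }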

    Read the article

  • ClearTrace Performance on 170GB of Trace Files

    - by Bill Graziano
    I’ve always worked to make ClearTrace perform well.  That’s probably because I spend so much time watching it work.  I’m often going through two or three gigabytes of trace files but I rarely get the chance to run it on a really large set of files.

    One of my clients wanted to run a full trace for a week and then analyze the results.  At the end of that week we had 847 200MB trace files for a total of nearly 170GB.

    I regularly use 200MB trace files when I monitor production systems.  I usually get around 300,000 statements in a file that size if it’s mostly stored procedures.  So those 847 trace files contained roughly 250 million statements.  (That’s 730 bytes per statement if you’re keeping track.  Newer trace files have some compression in them but I’m not exactly sure what they’re doing.)  On a system running 1,000 statements per second I get a new file every five minutes or so.

    It took 27 hours to process these files on an older development box.  That works out to 1.77MB/second.  That means ClearTrace processed about 2,654 statements per second.

    You can query the data while you’re loading it but I’ve found it works better to use a second instance of ClearTrace to do this.  I’m not sure why yet but I think there’s still some dependency between the two processes.

    ClearTrace is almost always CPU bound.  It’s really just a huge, ugly collection of regular expressions.  It only writes a summary to its database at the end of each trace file so that usually isn’t a bottleneck.  At the end of this process, the executable was using roughly 435MB of RAM.  Certainly more than when it started but I think that’s acceptable.

    The database where all this is stored started out at 100MB.  After processing 170GB of trace files the database had grown to 203MB.  The space savings are due to the “datawarehouse-ish” design and only storing a summary of each trace file.

    You can download ClearTrace for SQL Server 2008 or test out the beta version for SQL Server 2012.  Happy Tuning!

    Read the article

  • MySQL Enterprise Monitor 3.0.11 has been released

    - by Andy Bang
    We are pleased to announce that MySQL Enterprise Monitor 3.0.11 is now available for download on the My Oracle Support (MOS) web site. It will also be available via the Oracle Software Delivery Cloud in about 1 week. This is a maintenance release that includes a few new features and fixes a number of bugs. You can find more information on the contents of this release in the change log.

    You will find binaries for the new release on My Oracle Support. Choose the "Patches & Updates" tab, and then choose the "Product or Family (Advanced Search)" side tab in the "Patch Search" portlet. You will also find the binaries on the Oracle Software Delivery Cloud in approximately 1 week. Choose "MySQL Database" as the Product Pack and you will find the Enterprise Monitor along with other MySQL products.

    Based on feedback from our customers, MySQL Enterprise Monitor (MEM) 3.0 offers many significant improvements over previous releases. Highlights include:

    - Policy-based automatic scheduling of rules and event handling (including email notifications) make administration of scale-out easier and automatic
    - Enhancements such as automatic discovery of MySQL instances, centralized agent configuration and multi-instance monitoring further improve ease of configuration and management
    - The new cloud and virtualization-friendly, "agent-less" design allows remote monitoring of MySQL databases without the need for any remote agents
    - Trends, projections and forecasting - Graphs and Event handlers inform you in advance of impending file system capacity problems
    - Zero Configuration Query Analyzer - Works "out of the box" with MySQL 5.6 Performance_Schema (supported by 5.6.14 or later)
    - False positives from flapping or spikes are avoided using exponential moving averages and other statistical techniques
    - Advisors can analyze data across an entire group; for example, the Replication Configuration Advisor can scan an entire topology to find common configuration errors like duplicate server UUIDs or a slave whose version is less than its master's

    More information on the contents of this release is available here:

    - What's new in MySQL Enterprise Monitor 3.0?
    - MySQL Enterprise Edition: Demos
    - MySQL Enterprise Monitor Frequently Asked Questions
    - MySQL Enterprise Monitor Change History

    More information on MySQL Enterprise and the Enterprise Monitor can be found here:

    - http://www.mysql.com/products/enterprise/
    - http://www.mysql.com/products/enterprise/monitor.html
    - http://www.mysql.com/products/enterprise/query.html
    - http://forums.mysql.com/list.php?142

    If you are not a MySQL Enterprise customer and want to try the Monitor and Query Analyzer using our 30-day free customer trial, go to http://www.mysql.com/trials, or contact Sales at http://www.mysql.com/about/contact. If you haven't looked at MEM recently, and especially MEM 3.0, please do so now and let us know what you think.

    Thanks and Happy Monitoring!

    - The MySQL Enterprise Tools Development Team

    Read the article
