Search Results

Search found 25579 results on 1024 pages for 'complex event processing'.


  • How to explain to a layperson why a developer should not be interrupted while neck-deep in coding?

    - by András Szepesházi
    If you just consider the second part of my question, "why a developer should not be interrupted while neck-deep in coding", that has been discussed a number of times by smart people. Heck, even the co-founder of SO, Joel Spolsky, wrote a blog post about "getting in the zone" and "being knocked out of the zone", and why it takes an average of 15 minutes to regain productivity when working on complex, software-development-related tasks. So I think the why has been established. What I'm interested in is how to explain all that to somebody who doesn't know beans about Beans (khmm, I mean software development). How to tell the wife, or the funny guy from accounting at the workplace, or the long-time friend who pings you on Skype every 30 minutes with a "Wazzzzzzup?!", that all the interruptions have a much deeper impact on your work than the obvious 30 seconds they took from your time. Obviously you can't explain it with sentences like "I have to juggle a lot of variable names in my short-term memory" unless you want to be the target of blank stares or friendly abuse. I'd like to be able to explain all that to non-developers in a way that makes them clearly understand, without being offensive, elitist or too technical. EDIT: Thanks to everyone for their great insights. I've accepted EpsilonVector's answer as his analogy was the closest one to my original needs. The "falling asleep" explanation is neither offensive nor technical, almost anyone can relate to it, and the consequences of getting disturbed while falling asleep or while being in the zone are very similar: you experience frustration and you "lose" 15-20 minutes of time.

    Read the article

  • BizTalk 2009 - Scoped Record Counting in Maps

    - by StuartBrierley
    Within BizTalk there is a functoid called Record Count that will return the number of instances of a repeated record or repeated element that occur in a message instance. The input to this functoid is the record or element to be counted. As an example, take the following Source schema, where the Source message has a repeated record called Box and each Box has a repeated element called Item. An instance of this Source schema may look as follows: two Box records, one with two Items and one with only one Item. Our destination schema has a number of elements and a repeated Box record. The top-level elements contain totals for the number of boxes and the overall number of items. Each Box record contains a single element representing the number of items in that box. Using the Record Count functoid it is easy to map the top-level elements, producing the expected totals of 2 boxes and 3 items. We now need to map the total number of items per box, but how will we do this? We have already seen that the Record Count functoid returns the total number of instances for the entire message, and unfortunately it does not allow you to specify a scoping parameter. In order to achieve scoped record counting we will need to make use of a combination of functoids. As you can see above, by linking to a Logical Existence functoid from the record/element to be counted, we can then feed the output into a Value Mapping functoid. Set the other Value Mapping parameter to "1" and link the output to a Cumulative Sum functoid. Set the other Cumulative Sum functoid parameter to "1" to limit the scope of the Cumulative Sum. This gives us the expected results of items per box of 2 and 1 respectively. I ran into this issue with a larger schema on a more complex map, but the eventual solution is still the same. Hopefully this simplified example will act as a good reminder to me and save someone out there a few minutes of head scratching.

    Read the article

  • Nokia Lumia 920 Windows Phone 8 Announcement

    - by Tim Murphy
    Today Nokia and Microsoft held an event to officially introduce the Lumia 920. Below is a rundown of some of the things I found interesting. As a person who likes photography there was a lot to drool over. The main feature that caught my attention was PureView with its optical stabilization. This alone should improve the majority of your pictures. Add to that the SmartShoot object remover, which uses multiple images to remove unwanted people or objects that move through your picture, and you never have to accept reality again. For the most part the lenses concept introduced in Windows Phone 8 simply makes the camera more usable. Of course that is Microsoft's selling point. One lens that caught my attention was the Bing lens. I have to say it is about time that we can take pictures and use them to search for answers using Bing. There were a couple of features shown that involved augmented reality. One was similar to the yapf application that is already in the market, which overlays restaurants and other destinations over live camera views. The other was using the navigation directions with a live view. Then you get down to some of the physical features of the Lumia 920. The one that got the most stage time is that it has a great 2000mAh battery which can be charged wirelessly. They also pointed out the improved glare reduction of the 4.5 in. curved glass screen. This hardware improvement is enhanced further with software that detects glare conditions and adjusts the display attributes to improve viewing ease. Adding to the wireless cool factor of the Lumia 920 are the general NFC capabilities. This was demonstrated with NFC docking stations as well as JBL speakers and headphones. There was one more hardware feature that I applauded. The super-sensitive touch screen does away with one of my pet peeves with capacitive touch screens: you will never have to remove your gloves to operate your phone again. The mittens that they did the demo with looked more like boxing gloves. I was disappointed when Joe Belfiore said that they were only going to show a couple of new features of Windows Phone 8 and that we would hear more at future events. One of the things he did show is the ability to customize which buttons you prefer as defaults in IE10. For example, you could have the folders button where the refresh button normally is. He also showed that, at long last, you can natively take screenshots on your phone. Hopefully he will be back quickly to give us the rest of the features. The most disappointing part of the event was that we never found out when the phones would be released or how much they would cost. Let's hope this comes soon. Even with these couple of items still left on my wish list, I can't wait to get my hands on a Lumia 920.

    Read the article

  • Using the Java SE 8 Date Time API with JPA 2.1

    - by reza_rahman
    Most of you are hopefully aware of the new Date Time API included in Java SE 8. If you are not, you should check it out right now using the Java Tutorial trail dedicated to the topic. It is a significant leap forward in processing temporal data in Java. For those who already use Joda-Time the changes will look very familiar; very simplistically speaking, the Java SE 8 feature is basically Joda-Time standardized. Quite naturally, you will likely want to use the new Date Time API in your JPA domain model to better represent temporal data. The problem is that JPA 2.1 does not support the new API out of the box. So what are you to do? Fortunately, you can make use of fairly simple JPA 2.1 Type Converters to use the Date Time API in your JPA domain classes. Steven Gertiser shows you how to do it in an extremely well-written blog entry. Besides explaining the problem and the solution, the entry is actually very good for getting a better understanding of JPA 2.1 Type Converters as well. I think such a set of converters may be a good fit for Apache DeltaSpike as a Java EE 7 extension. In case you are wondering about Java SE 8 support in the JPA specification itself, Nick Williams has already entered an excellent, well-researched JIRA entry asking for such support in a future version of the JPA specification that's well worth looking at. Another possibility, of course, is for JPA providers to start supporting the Date Time API natively before anything is formalized in the specification. What do you think?
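
    As a taste of the approach, here is a minimal sketch of such a JPA 2.1 Type Converter for LocalDate. The class name and the autoApply choice are illustrative, not necessarily what Steven's entry uses:

      import java.sql.Date;
      import java.time.LocalDate;
      import javax.persistence.AttributeConverter;
      import javax.persistence.Converter;

      // Bridges java.time.LocalDate (Java SE 8) and java.sql.Date so a
      // JPA 2.1 provider, which predates the Date Time API, can persist it.
      @Converter(autoApply = true) // applies to every LocalDate attribute
      public class LocalDateConverter implements AttributeConverter<LocalDate, Date> {

          @Override
          public Date convertToDatabaseColumn(LocalDate attribute) {
              return attribute == null ? null : Date.valueOf(attribute);
          }

          @Override
          public LocalDate convertToEntityAttribute(Date dbData) {
              return dbData == null ? null : dbData.toLocalDate();
          }
      }

    With autoApply set to true, entity attributes of type LocalDate pick the converter up without any per-field annotation.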

    Read the article

  • Suggestions required to build an ECommerce Platform

    - by Haris
    For a prospective client we have to offer a solution that provides the following systems:

      CMS
      Order Management
      Shopping Cart
      CRM
      Helpdesk
      Accounting & Finance
      Custom Functions

    In order to save time and to avoid reinventing the wheel, our idea is to integrate different off-the-shelf solutions. Their first requirement is that the system has to be hosted in their country, which I think will exclude applications like Aplicor, Netsuite & Salesforce. Basically the nucleus would be the CMS, which would integrate all the other apps. PHP or .NET based solutions would be our preference as we have in-house expertise. So far the following are a few combinations I have come up with:

      1. Joomla (CMS) + Virtuemart (Cart + Ordering) + Sugar CRM + Open ERP (finance) + OTRS
      2. Magento (CMS + Cart + Ordering) + Sugar CRM + Open ERP (finance) + Helpdesk Ultimate
      3. Drupal (CMS) + Ubercart (Cart + Ordering) + Sugar CRM + Open ERP (finance) + Support Ticketing System
      4. Sharepoint (CMS) + OptimusBt (Cart + Ordering) + Dynamics CRM + Great Plains + SharepointHQ
      5. Dotnetnuke (CMS) + DNNSpot (Cart + Ordering) + Sigma Pro (CRM + Helpdesk) + Open ERP

    For Helpdesk I liked Zendesk, but the server location was the stopping factor; similarly for finance and CRM I liked Aplicor. I would not like to go into detailed requirements as it would make things very complex. Could you please suggest which options are worth looking into? What other options do we have?

    Read the article

  • MapReduce

    - by kaleidoscope
    MapReduce is a programming model and an associated implementation for processing and generating large data sets. Users specify a map function that processes a key/value pair to generate a set of intermediate key/value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key. Many real-world tasks are expressible in this model, as shown in the paper. Programs written in this functional style are automatically parallelized and executed on a large cluster of commodity machines. The run-time system takes care of the details of partitioning the input data, scheduling the program's execution across a set of machines, handling machine failures, and managing the required inter-machine communication. This allows programmers without any experience with parallel and distributed systems to easily utilize the resources of a large distributed system. Example: a process to count the appearances of each different word in a set of documents:

      void map(String name, String document):
        // name: document name
        // document: document contents
        for each word w in document:
          EmitIntermediate(w, 1);

      void reduce(String word, Iterator partialCounts):
        // word: a word
        // partialCounts: a list of aggregated partial counts
        int result = 0;
        for each pc in partialCounts:
          result += ParseInt(pc);
        Emit(result);

    Here, each document is split into words, and each word is counted initially with a "1" value by the Map function, using the word as the result key. The framework puts together all the pairs with the same key and feeds them to the same call to Reduce, so this function just needs to sum all of its input values to find the total appearances of that word.

    Sarang, K
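
    To make the model concrete, here is a toy single-process Java sketch of the same word count. It keeps the map/shuffle/reduce shape of the pseudocode above but has none of the partitioning, parallelism or fault tolerance a real framework provides; all names are illustrative:

      import java.util.*;

      public class WordCount {
          // Intermediate (word -> partial counts) store; a real framework
          // would partition this across machines (the "shuffle" phase).
          static Map<String, List<Integer>> intermediate = new HashMap<>();

          // "Map" phase: emit (word, 1) for every word in the document.
          static void map(String name, String document) {
              for (String w : document.split("\\s+"))
                  intermediate.computeIfAbsent(w, k -> new ArrayList<>()).add(1);
          }

          // "Reduce" phase: sum the partial counts for one key.
          static int reduce(String word, Iterator<Integer> partialCounts) {
              int result = 0;
              while (partialCounts.hasNext()) result += partialCounts.next();
              return result;
          }

          public static void main(String[] args) {
              map("doc1", "the quick brown fox");
              map("doc2", "the lazy dog");
              for (Map.Entry<String, List<Integer>> e : intermediate.entrySet())
                  System.out.println(e.getKey() + ": "
                      + reduce(e.getKey(), e.getValue().iterator()));
          }
      }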

    Read the article

  • Best Method of function parameter validation

    - by Aglystas
    I've been dabbling with the idea of creating my own CMS for the experience, and because it would be fun to run my website off my own code base. One of the decisions I keep coming back to is how best to validate incoming parameters for functions. This is mostly in reference to simple data types, since object validation would be quite a bit more complex. At first I debated creating a naming convention that would contain information about what the parameters should be (int, string, bool, etc.); then I also figured I could create options to validate against. But then in every function I would still need to run some sort of parameter validation that parses the parameter name to determine what the value can be and then validates against it. Granted, this could be handled by passing the list of parameters to a helper function, but that still needs to happen, and one of my goals is to remove the parameter validation from the function itself, so that the function contains only the code that accomplishes the intended task, without the additional validation code. Is there any good way of handling this, or is it so low-level that parameter validation is typically just done at the start of the function anyway, so I should stick with doing that?
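
    For what it's worth, one way to get validation out of the function body is to declare the rules on the parameters and enforce them once, in a wrapper. A minimal Java sketch of that idea (the asker's CMS may well be PHP; the annotation, interface and method names here are invented for illustration):

      import java.lang.annotation.*;
      import java.lang.reflect.*;

      public class ValidationDemo {

          // The rule lives next to the parameter, not inside the function.
          @Retention(RetentionPolicy.RUNTIME)
          @Target(ElementType.PARAMETER)
          @interface IntRange { int min(); int max(); }

          interface ArticleService {
              String load(@IntRange(min = 1, max = 999999) int articleId);
          }

          // Wraps any interface in a proxy that checks annotated
          // parameters before delegating to the real implementation.
          static <T> T withValidation(Class<T> iface, T target) {
              return iface.cast(Proxy.newProxyInstance(
                  iface.getClassLoader(), new Class<?>[] { iface },
                  (proxy, method, args) -> {
                      Parameter[] params = method.getParameters();
                      for (int i = 0; i < params.length; i++) {
                          IntRange r = params[i].getAnnotation(IntRange.class);
                          if (r != null) {
                              int v = (Integer) args[i];
                              if (v < r.min() || v > r.max())
                                  throw new IllegalArgumentException(
                                      "argument " + i + " out of range: " + v);
                          }
                      }
                      return method.invoke(target, args);
                  }));
          }

          public static void main(String[] args) {
              ArticleService svc = withValidation(ArticleService.class,
                  id -> "article " + id);   // body contains no validation
              System.out.println(svc.load(42));
              try {
                  svc.load(-1);             // rejected before the body runs
              } catch (IllegalArgumentException e) {
                  System.out.println("rejected: " + e.getMessage());
              }
          }
      }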

    Read the article

  • Personal Financial Management – The need for resuscitation

    - by Salil Ravindran
    Until a year or so ago, PFM (Personal Financial Management) was the blue-eyed boy of every channel banking head. In an age when bank account portability is still fiction, PFM was expected to incentivise customers to switch banks. It still is, in some emerging economies, but if the state of PFM in matured markets is anything to go by, it is in a state of coma and badly requires resuscitation. Studies conducted over the year show an alarming decline and stagnation in PFM usage in mature markets. A Sept 2012 report by Aite Group, "Strategies for PFM Success", shows that 72% of users hadn't used PFM and, worse, 58% of them were not keen on using it. Of the rest who had used it, only half did so on a bank site. While there are multiple reasons for this lack of adoption, some are glaringly obvious. While pretty graphs and pie charts are important to provide a visual representation of my income and expenses, they are simply not enough to encourage me to return. Static representation of data without any insightful analysis does not help me. Budgeting and cash flow are important, but when I have an operative account, a couple of savings accounts, a mortgage loan and a couple of credit cards, help me with what my affordability is in specific contexts rather than telling me I just busted my budget. Help me with the relative importance of each budget category so that I know it is fine to go over budget on books for my daughter as against going over budget on eating out. Budget overruns and spend analysis are post facto, and I am informed of my sins only when I return to online banking. That too, only if I decide to come to the PFM area. Fundamentally, PFM should be a part of my banking engagement rather than an analysis tool. It should be contextual so that I can make insight-based decisions. So what can be done to resuscitate PFM?

    Amalgamation with banking activities: In most cases, PFM tools are integrated into online banking pages and they are like chapter 37 of a long story. PFM needs to be a way of banking rather than a tool. Available balances should shift to spendable balances. Budget and goal related insights should be integrated with transaction sessions to drive pre-event financial decisions.

    Personal financial guidance: Banks need to think at ground level and see if their PFM offering is really helping customers achieve self-actualisation. Banks need to recognise that most customers out there are not proficient at making the best value of their money. Customers return when they know that they are being guided rather than just informed on their finances. Integrating contextual financial offers and financial planning into PFM is one way ahead. Yet another way is to help customers tag unwanted spending, thereby encouraging sound savings habits.

    Mobile PFM: Most banks have left all those numbers on online banking. With access mostly having moved to devices and the success of apps, moving PFM onto devices will give it a much needed shot in the arm. This is not only about presenting the same wine in a new bottle but also about leveraging the power of the device to push real-time notifications that shape pre-purchase decisions. The pursuit should be to analyse spend, budgets and financial goals in real time and push them pre-event onto the device. So next time, I should know that I have overrun my eating-out budget before walking into that burger joint, not after.

    Increase participation and collaboration: Peer group experiences and comments are valued above those offered by the bank. Integrating social media into PFM engagement will let customers share and solicit financial management experiences with their peer group. Peer comparisons help benchmark one's savings and spending habits against those of the peer group and increase stickiness.

    While mature markets have gone through this learning in some way over the last year, banks in maturing digital banking economies increasingly seem to be falling into the same trap. Best practice lies in profiling and segmenting customers, being where they are, and contextually guiding them to identify and achieve their financial goals. Banks could look at the likes of Simple and Movenbank to draw inspiration from.

    Read the article

  • Should a project start with the client or the server?

    - by MadBurn
    Pretty simple question with a complex answer. Should a project start with the client or the server, and why? Where should a single programmer start a client/server project? What are the best practices, and what are the reasons behind them? If you can't think of any, what reasons would you use to justify choosing to start one before the other? Personally, I'm asking this question because I'm finishing up the specs for a project I will be doing for myself on the side, for fun. But now that I'm finishing this phase, I'm wondering, "OK, now where do I begin?" Since I've never done a project like this by myself, I'm not sure where I should start. In this project, my server will be doing all the heavy lifting and the client will just be sending updates, getting information from the server, and displaying it. But I don't want that to sway the answer, as I'm looking for a more in-depth and less specific answer that would apply to any project I begin in the future.

    Read the article

  • Finding header files

    - by rwallace
    A C or C++ compiler looks for header files using a strict set of rules: relative to the directory of the including file (if "" was used), then along the specified and default include paths, and fail if still not found. An ancillary tool such as a code analyzer (which I'm currently working on) has different requirements: it may, for a number of reasons, not have the benefit of the setup performed by a complex build process, and has to make the best of what it is given. In other words, it may find a header file not present in the include paths it knows, and has to take its best shot at finding the file itself. I'm currently thinking of using the following algorithm:

      1. Start in the directory of the including file.
      2. Is the header file found in the current directory or any subdirectory thereof? If so, done.
      3. If we are at the root directory, the file doesn't seem to be present on this machine, so skip it.
      4. Otherwise move to the parent of the current directory and go to step 2.

    Is this the best algorithm to use? In particular, does anyone know of any case where a different algorithm would work better?
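
    A minimal Java sketch of the proposed algorithm, assuming nothing beyond the standard library. Note that it naively re-scans the already-searched subtree after moving to the parent; a real tool would skip the child directory it just came from:

      import java.io.IOException;
      import java.nio.file.*;
      import java.util.Optional;
      import java.util.stream.Stream;

      public class HeaderFinder {

          // Search the including file's directory and all subdirectories,
          // then widen one parent at a time until the filesystem root.
          static Optional<Path> find(Path includingFile, String headerName)
                  throws IOException {
              Path dir = includingFile.toAbsolutePath().getParent();
              while (dir != null) {
                  try (Stream<Path> tree = Files.walk(dir)) {
                      Optional<Path> hit = tree
                          .filter(p -> p.getFileName() != null
                                    && p.getFileName().toString().equals(headerName))
                          .findFirst();
                      if (hit.isPresent()) return hit;
                  }
                  dir = dir.getParent(); // widen scope; null once past the root
              }
              return Optional.empty();   // header not present on this machine
          }
      }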

    Read the article

  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges (loading data from data stores, generating XML, testing, integrating with web services, and then deployment and delivery) takes a lot of coding and effort, as does writing the documentation, models and schema, doing naming and design rule (NDR) checks, and packaging all this together (such as for NIEM IEPD use). What if there was a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques in combination with CAM templates and visual drag-and-drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug-and-play with your middleware stack, all with just a few lines of Java code (about 5, actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX with a MySQL data store and integrating with Oracle WebLogic server, please see this short video: http://youtube.com/user/TheCameditor There is also a Quick Guide available that provides more technical insight, along with a sample pack download of templates and SQL that you can try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new innovative approach.

    Read the article

  • JOGL2 test compiles, but doesn't execute - help?

    - by Chuchinyi
    I have a problem with JOGL2. My JOGL2Template.java compiles fine, but executing it results in the following error:

      D:\java\java\jogl>javac JOGL2Template.java   <== compile ok
      D:\java\java\jogl>java JOGL2Template         <== execute error

      Exception in thread "main" java.lang.ExceptionInInitializerError
              at javax.media.opengl.GLProfile.<clinit>(GLProfile.java:1176)
              at JOGL2Template.<init>(JOGL2Template.java:24)
              at JOGL2Template.main(JOGL2Template.java:57)
      Caused by: java.lang.SecurityException: no certificate for gluegen-rt.dll in D:\java\lib\gluegen-rt-natives-windows-i586.jar
              at com.jogamp.common.util.JarUtil.validateCertificate(JarUtil.java:350)
              at com.jogamp.common.util.JarUtil.validateCertificates(JarUtil.java:324)
              at com.jogamp.common.util.cache.TempJarCache.validateCertificates(TempJarCache.java:328)
              at com.jogamp.common.util.cache.TempJarCache.bootstrapNativeLib(TempJarCache.java:283)
              at com.jogamp.common.os.Platform$3.run(Platform.java:308)
              at java.security.AccessController.doPrivileged(Native Method)
              at com.jogamp.common.os.Platform.loadGlueGenRTImpl(Platform.java:298)
              at com.jogamp.common.os.Platform.<clinit>(Platform.java:207)
              ... 3 more

    Here is the JOGL2Template.java source code:

      import java.awt.Dimension;
      import java.awt.Frame;
      import java.awt.event.WindowAdapter;
      import java.awt.event.WindowEvent;
      import javax.media.opengl.GLAutoDrawable;
      import javax.media.opengl.GLCapabilities;
      import javax.media.opengl.GLEventListener;
      import javax.media.opengl.GLProfile;
      import javax.media.opengl.awt.GLCanvas;
      import com.jogamp.opengl.util.FPSAnimator;
      import javax.swing.JFrame;

      /*
       * JOGL 2.0 Program Template for AWT applications
       */
      public class JOGL2Template extends JFrame implements GLEventListener {
         private static final int CANVAS_WIDTH = 640;   // Width of the drawable
         private static final int CANVAS_HEIGHT = 480;  // Height of the drawable
         private static final int FPS = 60;             // Animator's target frames per second

         // Constructor to create profile, caps, drawable, animator, and initialize Frame
         public JOGL2Template() {
            // Get the default OpenGL profile that best reflects your running platform.
            GLProfile glp = GLProfile.getDefault();

            // Specifies a set of OpenGL capabilities, based on your profile.
            GLCapabilities caps = new GLCapabilities(glp);

            // Allocate a GLDrawable, based on your OpenGL capabilities.
            GLCanvas canvas = new GLCanvas(caps);
            canvas.setPreferredSize(new Dimension(CANVAS_WIDTH, CANVAS_HEIGHT));
            canvas.addGLEventListener(this);

            // Create an animator that drives the canvas' display() at 60 fps.
            final FPSAnimator animator = new FPSAnimator(canvas, FPS);

            addWindowListener(new WindowAdapter() { // For the close button
               @Override
               public void windowClosing(WindowEvent e) {
                  // Use a dedicated thread to run stop() to ensure that the
                  // animator stops before the program exits.
                  new Thread() {
                     @Override
                     public void run() {
                        animator.stop();
                        System.exit(0);
                     }
                  }.start();
               }
            });

            add(canvas);
            pack();
            setTitle("OpenGL 2 Test");
            setVisible(true);
            animator.start(); // Start the animator
         }

         public static void main(String[] args) {
            new JOGL2Template();
         }

         @Override
         public void init(GLAutoDrawable drawable) {
            // Your OpenGL codes to perform one-time initialization tasks
            // such as setting up of lights and display lists.
         }

         @Override
         public void display(GLAutoDrawable drawable) {
            // Your OpenGL graphic rendering codes for each refresh.
         }

         @Override
         public void reshape(GLAutoDrawable drawable, int x, int y, int w, int h) {
            // Your OpenGL codes to set up the view port, projection mode and view volume.
         }

         @Override
         public void dispose(GLAutoDrawable drawable) {
            // Hardly used.
         }
      }

    Any ideas what might be the cause of these errors?

    Read the article

  • Building a template engine - starting point

    - by Anirudh
    We're building a Django-based project with a template component. This component will be separate from the project as such and can be Django/Python, Node, Java or whatever works. The template has to be rendered into HTML. The templates will contain references to objects with properties that are defined in the DB, say, a Bus. For example, it could be something like [object type="vehicle" weight="heavy"], and it would have to pull a random object from the DB fulfilling the criteria type="vehicle" weight="heavy" (bus/truck/jet) and then substitute that tag with an image, say, of a bus. Also it would have to be able to handle some processing. E.g.:

      What is [X type="integer" lte="10"] + [Y type="integer" lte="10"]
      [option X+Y correct_ans="true"]
      [option X-Y correct_ans="false"]
      [option X+Y+1 correct_ans="false"]

    The engine would be expected to fill in a random integer value <= 10 for X and Y and show radio buttons for each of the options. It would also have to store the fact that the first option is the correct answer. Does it make sense to write something from scratch? Or is it better to use an existing templating system (like Django's own) as a starting point? Any suggestions on how I can approach this?
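
    Whatever the final stack, the substitution core is small. Here is a minimal Java sketch of the tag-parsing step, with a pluggable resolver standing in for the DB lookup; the patterns and names are illustrative only, not a recommendation against reusing an existing engine:

      import java.util.*;
      import java.util.function.Function;
      import java.util.regex.*;

      public class TagEngine {
          // Matches [tag attr="value" ...]; group 1 = tag name, group 2 = raw attributes.
          private static final Pattern TAG  = Pattern.compile("\\[(\\w+)([^\\]]*)\\]");
          private static final Pattern ATTR = Pattern.compile("(\\w+)=\"([^\"]*)\"");

          // Walks the template, parses each tag's attributes, and lets the
          // resolver (e.g. a DB query for a random matching object) replace it.
          static String render(String template,
                               Function<Map<String, String>, String> resolver) {
              Matcher m = TAG.matcher(template);
              StringBuffer out = new StringBuffer();
              while (m.find()) {
                  Map<String, String> attrs = new LinkedHashMap<>();
                  attrs.put("tag", m.group(1));
                  Matcher a = ATTR.matcher(m.group(2));
                  while (a.find()) attrs.put(a.group(1), a.group(2));
                  m.appendReplacement(out,
                      Matcher.quoteReplacement(resolver.apply(attrs)));
              }
              m.appendTail(out);
              return out.toString();
          }

          public static void main(String[] args) {
              String html = render(
                  "What is [object type=\"vehicle\" weight=\"heavy\"]?",
                  attrs -> "<img src=\"/img/" + attrs.get("type") + ".png\"/>");
              System.out.println(html); // What is <img src="/img/vehicle.png"/>?
          }
      }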

    Read the article

  • What are `Developmental Milestones` for programming skills?

    - by Holmes
    I studied in the field of Computer Science for 6 years: a bachelor's degree and a master's degree. I have studied all the basic programming languages like C, Java, VB, C#, Python, etc. When I have free time, I learn new programming languages and follow new programming trends by myself, such as PHP, HTML5, CSS3, LESS, Bootstrap, Symfony2, and GitHub. So, if someone wants me to write some instructions using these languages, I'm certain that I can do it; not that well, but I can get the job done. However, I don't have any favorite programming language. Moreover, I have also studied algorithms, databases, and so on. Everything I just wrote makes it seem like I know a lot in this field. In fact, I feel I am very stupid. I cannot answer 80% of the questions on SO, in spite of all those languages I have studied. Perhaps it is because I have never worked before. As there are Developmental Milestones for children, which refer to how a child becomes able to do more complex things as they get older, I would like to evaluate the same thing but for programming skills. What are the set of functional skills or age-specific tasks that most programmers can do at a certain age range? In order to evaluate myself, I would like to ask your opinion: are all of the skills I mentioned above enough for a programmer to know at 25 years old? What are your suggestions for improving my skills in this field?

    Read the article

  • Calling a .NET C# class from XSLT

    - by HanSolo
    If you've ever worked with XSLT, you'd know that it's pretty limited when it comes to its programming capabilities. Try writing a for loop in XSLT and you'd know what I mean. XSLT is not designed to be a programming language, so you should never put too much programming logic in your XSLT. That code can be a pain to write and maintain and so it should be avoided at all costs. Keep your XSLT simple and put any complex logic that your XSLT transformation requires in a class. Here is how you can create a helper class and call that from your XSLT. For example, this is my helper class:

      public class XsltHelper
      {
          public string GetStringHash(string originalString)
          {
              return originalString.GetHashCode().ToString();
          }
      }

    And this is my XSLT file (notice the namespace declaration that references the helper class):

      <?xml version="1.0" encoding="UTF-8" ?>
      <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"
                      xmlns:ext="http://MyNamespace">
        <xsl:output method="text" indent="yes" omit-xml-declaration="yes"/>
        <xsl:template match="/">The hash code of "<xsl:value-of select="stringList/string1" />" is "<xsl:value-of select="ext:GetStringHash(stringList/string1)" />".
        </xsl:template>
      </xsl:stylesheet>

    Here is how you can include the helper class as part of the transformation:

      string xml = "<stringList><string1>test</string1></stringList>";
      XmlDocument xmlDocument = new XmlDocument();
      xmlDocument.LoadXml(xml);

      XslCompiledTransform xslCompiledTransform = new XslCompiledTransform();
      xslCompiledTransform.Load("XSLTFile1.xslt");

      XsltArgumentList xsltArgs = new XsltArgumentList();
      xsltArgs.AddExtensionObject("http://MyNamespace", Activator.CreateInstance(typeof(XsltHelper)));

      using (FileStream fileStream = new FileStream("TransformResults.txt", FileMode.OpenOrCreate, FileAccess.ReadWrite, FileShare.ReadWrite))
      {
          // transform the xml and output to the output file
          xslCompiledTransform.Transform(xmlDocument, xsltArgs, fileStream);
      }

    Read the article

  • Languages like Tcl that have configurable syntax?

    - by boost
    I'm looking for a language that will let me do what I could do with Clipper years ago, and which I can do with Tcl, namely add functionality in a way other than just adding functions. For example, in Clipper/(x)Harbour there are the commands #command, #translate, #xcommand and #xtranslate, which allow things like this:

      #xcommand REPEAT; => DO WHILE .T.
      #xcommand UNTIL <cond>; => IF (<cond>); ;EXIT; ;ENDIF; ;ENDDO

      LOCAL n := 1
      REPEAT
         n := n + 1
      UNTIL n > 100

    Similarly, in Tcl I'm doing:

      proc process_range {_for_ project _from_ dat1 _to_ dat2 _by_ slice} {
          set fromDate [clock scan $dat1]
          set toDate [clock scan $dat2]
          if {$slice eq "day"} then {set incrementor [expr 24 * 60]}
          if {$slice eq "hour"} then {set incrementor 60}
          set method DateRange
          puts "Scanning from [clock format $fromDate -format "%c"] to [clock format $toDate -format "%c"] by $slice"
          for {set dateCursor $fromDate} {$dateCursor <= $toDate} {set dateCursor [clock add $dateCursor $incrementor minutes]} {
              # ...
          }
      }

      process_range for "client" from "2013-10-18 00:00" to "2013-10-20 23:59" by day

    Are there any other languages that permit this kind of, almost COBOL-esque, syntax modification? If you're wondering why I'm asking, it's for setting up stuff so that others with a not-as-geeky-as-I-am skillset can declare processing tasks.

    Read the article

  • ArchBeat Link-o-Rama for November 13, 2012

    - by Bob Rhubart
    This week on the OTN Solution Architect Homepage: Make time to check out this week's features on the OTN Solution Architect Homepage, including:

      SOA Practitioner Guide: Identifying and Discovering Services
      Setting Up, Configuring, and Using an Oracle WebLogic Server Cluster
      OTN ArchBeat Podcast: Are You Future Proof (Conclusion)

    Keynote: New Paradigms for Application Architecture: From Applications to IT Services. In this keynote address from the SOA, Cloud, and Service Technology Symposium, Anne Thomas Manes highlights the importance of adapting to the current trend marked by the convergence of mobile, social and cloud, moving away from app-centric design to service-based solutions.

    New Solaris Cluster! | Jeff Victor. "Oracle Solaris Cluster 4.1 offers both High Availability (HA) and also Scalable Services capabilities," explains Jeff Victor. "HA delivers automatic restart of software on the same cluster node and/or automatic failover from a failed node to a working cluster node. Software and support is available for both x86 and SPARC systems." You'll find download links and other resources in Jeff's short post.

    ADF BC View Accessor To Centralize Business Logic Processing | Andrejus Baranovskis. Oracle ACE Director Andrejus Baranovskis illustrates one way to implement a use case that requires a comparison between the current row status and the data returned by another query (no master-detail relationship).

    Thought for the Day: "The danger from computers is not that they will eventually get as smart as men, but that we will meanwhile agree to meet them halfway." (Bernard Avishai) Source: SoftwareQuotes.com

    Read the article

  • Data Movement and the Decision Matrix

    - by BuckWoody
    Maybe it's my military background, or maybe I've always had this predilection, but I like to use two devices when I need to make a complex decision: a checklist and a decision matrix. I like to use a checklist because it ensures that I remember the big bits of what I need to do, and it brings up questions or areas that I didn't think about when evaluating options for the decision. And the decision matrix: that's the thing I use to actually lay out those options. It's simply a spreadsheet-like grid (I use Excel, but paper and pencil works as well) that lays out the requirements or advantages for the decision across the top, and the options I have on the left-hand side. Then in the "cells" I put whether or not that option on the left will meet the requirement in that column. I then simply "weight" each cell to organize the choices by best fit. The right answer (or answers) will float right to the top. I was asked yesterday about options for moving data in SQL Server to another system. There are dozens of ways to do this, from bcp to Replication, each with certain advantages and costs. But asking the questions in the top row first helped me show the person that it isn't a particular technology that is important; it's laying out those requirements and thinking about which elements are more important than the others. For instance, is it more important to have the data moved all the time, or is it OK if that happens once in a while? Does the data have to move in two directions or just one? All of these will help the answer jump right out. Try it sometime; it's a great learning exercise, since it will force you to focus on filling out the matrix. The answer is out there, Neo.
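
    For illustration, here is a toy version of that weighted matrix in code form. The criteria, weights, options and fit scores below are invented for the sketch, not Buck's actual analysis:

      import java.util.*;

      public class DecisionMatrix {
          public static void main(String[] args) {
              // Requirements across the "top", with a weight for each.
              double[] weights = { 3, 2, 1 }; // continuous? two-way? low admin?

              // Options down the "side", with a fit score per requirement.
              Map<String, double[]> options = new LinkedHashMap<>();
              options.put("bcp + schedule",            new double[] { 0, 0, 1 });
              options.put("Transactional replication", new double[] { 1, 0, 0 });
              options.put("Merge replication",         new double[] { 1, 1, 0 });

              // Weighted sums float the best-fit option to the top.
              options.entrySet().stream()
                  .sorted((a, b) -> Double.compare(score(b.getValue(), weights),
                                                   score(a.getValue(), weights)))
                  .forEach(e -> System.out.printf("%-26s %.1f%n",
                                e.getKey(), score(e.getValue(), weights)));
          }

          static double score(double[] fit, double[] w) {
              double s = 0;
              for (int i = 0; i < fit.length; i++) s += fit[i] * w[i];
              return s;
          }
      }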

    Read the article

  • Terminal closing itself after 14.04 upgrade

    - by David
    All was fine in 12.04; in this case I'm using VirtualBox on Windows. In the last few days the warning message about my Ubuntu version no longer being supported was coming up pretty often, so yesterday I finally decided to upgrade. The upgrade process ran OK: no errors, no warnings. After rebooting the errors started to happen. Just after booting up there were some errors about video, GNOME, and video textures (sorry, I didn't pay attention at the time so I don't remember them well). Luckily that went away after installing the VirtualBox additions. But the big problem here is that I can't use the terminal. It opens OK when pressing Ctrl+Alt+T, but most commands cause it to close instantly. For example, df, ls, mv, cd... usually work, although it has closed a few times. But 'find' causes an instant close. 'apt-get update' kills it too, just after it gets the package list from the sources, when it starts processing them. I've tried xterm; everything works and I have none of these problems. I have tried reinstalling konsole, bash-static, and bash-completion, but nothing worked. I have no idea what to do, as there is no error message to search for the cause. It seems like something related to bash, but that's all I know.

    Read the article

  • Question about Headings for Professionals <H1>... <H9> in SEO & Browser Compatibility Differences

    - by Sam
    We all know the importance and significance of headings for professional webmasters. These are known to professional developers as <h1>Heading 1</h1>, h2 ... h6. As a daring web developer I lately needed more short headings for a complex structured document, and I thought what the hell and went ahead and used the following in CSS:

      h1, h2, h3, h4, h5, h6 { }
      h7 { }
      h8 { }
      h9 { }

    My experiment turned out to pay off, but only in Firefox, Safari, Chrome etc., not in Internet Explorer 8.

    Q1. Who (and when) decided that headings should go up to h6, and not h4 or h7?
    Q2. Why do h7 - h9 work perfectly in all major browsers except IE8?
    Q3. What is the significance of headings h1 ~ h9 for Bing, Yahoo and Google in terms of recognition? Obviously h1 is more important than h2, but do they differentiate between h5 and h6, or not anymore after h3?

    Read the article

  • Best peer-to-peer game architecture

    - by Dejw
    Consider a setup where game clients:

      1. have quite small computing resources (mobile devices, smartphones)
      2. are all connected to a common router (LAN, hotspot, etc.)

    The users want to play a multiplayer game, without an external server. One solution is to host an authoritative server on one phone, which in this case would also be a client. Considering point 1, this solution is not acceptable, since the phone's computing resources are not sufficient. So, I want to design a peer-to-peer architecture that will distribute the game's simulation load among the clients. Because of point 2 the system needn't be complex with regard to optimization; the latency will be very low. Each client can be an authoritative source of data about itself and its immediate environment (for example, bullets). What would be the best approach to designing such an architecture? Are there any known examples of such a LAN-level peer-to-peer protocol?

    Notes: Some of the problems are addressed here, but the concepts listed there are too high-level for me. Security: I know that not having one authoritative server is a security issue, but it is not relevant in this case as I'm willing to trust the clients.

    Edit: I forgot to mention: it will be a rather fast-paced game (a shooter). Also, I have already read about networking architectures at Gaffer on Games.

    Read the article

  • Leveraging Code Across Platforms in Ever Bigger Games

    - by ashes999
    Summary: In the same way that I continually build more complex engines and libraries within a single platform and technology, to let me build increasingly bigger and better games, how do I continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experience?

    Games are hard to build. Big games are even harder to build. I've decided that to be able to make big games, I need to start building smaller games, and building up an asset base of code, assets (graphics, sounds), tools, and most importantly, game engines, so that I can eventually get there. One game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example:

      A simple 2D game
      A tile-based game
      A game with RPG elements (items, equipment, monsters, battle)
      A full-fledged RPG
      A 3D RPG

    The problem now is that if I have to change platforms or tools, I don't know how to leverage past code bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I would have ten years of game development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?

    Read the article

  • Is there an easy way to configure an Ubuntu system to function as a proxy/file server from behind a NAT?

    - by amol.kamath
    Sorry for the long question, but the situation/desire is quite complex. Here is my setup: I have a laptop which I carry around everywhere, and I have a desktop sitting at home, connected to the internet through a router using NAT. My objective is to create a connection from my laptop to my desktop that allows me to (in order of priority):

      1. Use the desktop as a proxy server
      2. Access files on the desktop remotely
      3. Control said desktop from the laptop using VNC or similar

    Now here is the scene. I have already looked up and tried several ways to achieve the above goals:

      Teamviewer: I used it and didn't like it. This is not an option.
      SSH: This seems ideal; I have figured out a way to use it for both proxy and file sharing. However, I am currently unable to connect due to the NAT. I have a separate thread trying to get that to work here.
      VPN: I've figured out how to use this method for proxy, but not file sharing. However, this faces the same problem as the above: I can't get it to connect through the NAT.

    Does anyone have any other solutions for what I want? Otherwise, if there are solutions to connecting through the NAT, please tell me (in the other thread). Thanks

    Read the article

  • Can JSON be made easily and safely editable by the non-technical Excel crowd?

    - by glitch
    I'm looking for a data storage format that's very intuitive and easy to edit. It should ideally be targeted towards the same crowd as Excel. At the same time I would like the data structure to be a tree. Ideally this would be JSON, since it offers both the tree aspect and allows for more interesting constructs like arrays. That, and parsing libraries for JSON are ubiquitous, so I don't have to reinvent the wheel. The problem is that, at least with a non-specialized text editor, JSON is a giant pain to edit for a non-technical user. I'm thinking along the lines of someone who might have used Excel in the past, but never a real text editor. Someone who might not be comfortable with the idea of preserving JSON syntax by hand. Are there data formats out there that would fit this profile? I'd very much prefer this to actually be JSON, but then it would require a solid editing tool that hides the underlying implementation from the user. Think of Excel and how it abstracts CSV syntax from the user. The reason I'm looking for something like this is that the team has been working with pretty hierarchical data for a while now, and we've hit the limits of how easy it is to represent it in simple CSVs without creating complex rules for how to represent hierarchy semantics in each row. Any suggestions?

    Read the article

  • Experiencing the New Social Enterprise

    - by kellsey.ruppel
    Social media and networking tools, popularly known as Web 2.0 technologies, are rapidly transforming user expectations of enterprise systems. Many organizations are investing in these new tools to cultivate a modern user experience in an "Enterprise 2.0" environment that unlocks the full potential of traditional IT systems and fosters collaboration in key business processes. Here are some key points and takeaways from some of the keynotes yesterday at the Enterprise 2.0 Conference:

      Social networks continue to forge complex connections between people, processes, and content, facilitating collaboration and the sharing of information
      The customer of today lives inside of Facebook, on the web, or has an app for that, and they have a question and want an answer NOW
      Empowered employees are able to connect to colleagues, build relationships, develop expertise, self-select projects of interest to them, and expand skill sets well beyond their formal roles
      A fundamental promise of Enterprise 2.0 is that ideas will be generated and shared by everyone across the organization, leading to increased innovation, agility, and competitive advantage

    How well is your organization delivering on these concepts? Are you able to successfully bring together people, processes and content? Are you providing the social tools your employees want and need? Are you experiencing the new social enterprise?

    Read the article
