Search Results

Search found 34513 results on 1381 pages for 'end task'.


  • Gathering statistics for an Oracle WebCenter Content Database

    - by Nicolas Montoya
    Have you ever heard: "My Oracle WebCenter Content instance is running slow. I checked the memory and CPU usage of the application server and it has plenty of resources. What could be going wrong?"

    An Oracle WebCenter Content instance runs on an application server and relies on a database server on the back end. If your application server tier is running fine, chances are that your database server tier may host the root of the problem. While many things could cause performance problems, on active Enterprise Content Management systems keeping database statistics updated is extremely important.

    The Oracle Database has a set of built-in optimizer utilities that can help make database queries more efficient. It is strongly recommended to update or re-create the statistics about the physical characteristics of a table and the associated indexes in order to maximize the efficiency of the optimizer. These physical characteristics include:

    - Number of records
    - Number of pages
    - Average record length

    The frequency with which you need to update statistics depends on how quickly the data is changing. Typically, statistics should be updated when the number of new items since the last update is greater than ten percent of the number of items when the statistics were last updated. If a large number of documents are being added to or removed from the system, a post step should be added to gather statistics upon completion of this massive data change. In some cases, you may need to collect statistics in the middle of the data processing to expedite its execution. These processes include, but are not limited to: data migration, bootstrapping of a new system, records management disposition processing (typically at the end of the calendar year), etc. A DOCUMENTS table with ten million rows will often generate a very different plan than a table with just a thousand.

    A quick check of the statistics for the WebCenter Content (WCC) database can be performed via the query below:

        SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
               TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
        FROM   DBA_TABLES
        WHERE  TABLE_NAME = 'DOCUMENTS';

        OWNER       TABLE_NAME    NUM_ROWS   BLOCKS   AVG_ROW_LEN   TO_CHAR(LAST_ANALYZ
        ----------  ------------  ---------  -------  ------------  -------------------
        ATEAM_OCS   DOCUMENTS          4172       46            61  04/06/2012 11:17:51

    This output returns not only the date when the WCC table DOCUMENTS was last analyzed, but also the <DATABASE SCHEMA OWNER> for this table, in the form <PREFIX>_OCS. This database username can later be used to check on other objects owned by the WCC <DATABASE SCHEMA OWNER>, as shown below:

        SELECT OWNER, TABLE_NAME, NUM_ROWS, BLOCKS, AVG_ROW_LEN,
               TO_CHAR(LAST_ANALYZED, 'MM/DD/YYYY HH24:MI:SS')
        FROM   DBA_TABLES
        WHERE  OWNER = 'ATEAM_OCS'
        ORDER BY NUM_ROWS ASC;
        ...
        OWNER       TABLE_NAME             NUM_ROWS   BLOCKS   AVG_ROW_LEN   TO_CHAR(LAST_ANALYZ
        ----------  ---------------------  ---------  -------  ------------  -------------------
        ATEAM_OCS   REVISIONS                   2051       46           141  04/09/2012 22:00:22
        ATEAM_OCS   DOCUMENTS                   4172       46            61  04/06/2012 11:17:51
        ATEAM_OCS   ARCHIVEHISTORY              4908      244           218  04/06/2012 11:17:49
        ATEAM_OCS   DOCUMENTHISTORY             5865      110            72  04/06/2012 11:17:50
        ATEAM_OCS   SCHEDULEDJOBSHISTORY       10131      244           131  04/06/2012 11:17:54
        ATEAM_OCS   SCTACCESSLOG               10204      496           268  04/06/2012 11:17:54
        ...

    The Oracle Database allows you to collect statistics of many different kinds as an aid to improving performance. The DBMS_STATS package is concerned with optimizer statistics only. The database sets automatic statistics collection of this kind on by default, so the DBMS_STATS package is intended only for specialized cases. The following subprograms gather certain classes of optimizer statistics:

    - GATHER_DATABASE_STATS Procedures
    - GATHER_DICTIONARY_STATS Procedure
    - GATHER_FIXED_OBJECTS_STATS Procedure
    - GATHER_INDEX_STATS Procedure
    - GATHER_SCHEMA_STATS Procedures
    - GATHER_SYSTEM_STATS Procedure
    - GATHER_TABLE_STATS Procedure

    The DBMS_STATS.GATHER_SCHEMA_STATS PL/SQL procedure gathers statistics for all objects in a schema:

        DBMS_STATS.GATHER_SCHEMA_STATS (
            ownname          VARCHAR2,
            estimate_percent NUMBER   DEFAULT to_estimate_percent_type(get_param('ESTIMATE_PERCENT')),
            block_sample     BOOLEAN  DEFAULT FALSE,
            method_opt       VARCHAR2 DEFAULT get_param('METHOD_OPT'),
            degree           NUMBER   DEFAULT to_degree_type(get_param('DEGREE')),
            granularity      VARCHAR2 DEFAULT GET_PARAM('GRANULARITY'),
            cascade          BOOLEAN  DEFAULT to_cascade_type(get_param('CASCADE')),
            stattab          VARCHAR2 DEFAULT NULL,
            statid           VARCHAR2 DEFAULT NULL,
            options          VARCHAR2 DEFAULT 'GATHER',
            objlist          OUT      ObjectTab,
            statown          VARCHAR2 DEFAULT NULL,
            no_invalidate    BOOLEAN  DEFAULT to_no_invalidate_type(get_param('NO_INVALIDATE')),
            force            BOOLEAN  DEFAULT FALSE);

    There are several values for the OPTIONS parameter that we need to know about:

    - GATHER reanalyzes the whole schema.
    - GATHER EMPTY only analyzes tables that have no existing statistics.
    - GATHER STALE only reanalyzes tables with more than 10 percent modifications (inserts, updates, deletes).
    - GATHER AUTO will reanalyze objects that currently have no statistics and objects with stale statistics. Using GATHER AUTO is like combining GATHER STALE and GATHER EMPTY.

    Example:

        exec dbms_stats.gather_schema_stats( -
          ownname => '<PREFIX>_OCS', -
          options => 'GATHER AUTO' -
        );

    Read the article

  • Be There: Tinkerforge/NetBeans Platform Integration Course

    - by Geertjan
    Tinkerforge is an electronic construction kit. It exposes a number of API bindings, including, of course, Java. The nice thing also is that Tinkerforge products are open source, both on the hardware and software levels, so you can take their bases as a starting point for your own modifications.

    "The TinkerForge system is a set of pre-built electronics boards that are built in such a way that you can stack the boards (known as bricks), attach accessories (known as bricklets), and have your prototype up and running quickly. Unlike systems such as the Arduino or Launchpad, the TinkerForge has to be attached to a computer and the computer does all of the work. With an easy set of application programming interfaces (APIs) available in C/C++, C#, Java, PHP, and Ruby, the system is easy to interface and program over USB in a snap." (from this useful article)

    Henning Krüp, who has arranged several NetBeans Platform Certified Training Courses in the past, in the Nordhorn/Lingen area in Germany, had the inspired idea of focusing the next course on integration with Tinkerforge. In other words, the whole course will be focused on creating a standalone Java desktop application that leverages the NetBeans Platform to interact with Tinkerforge!

    Interested in joining the course or setting up something similar yourself? The course organized by Henning will be held from 19 to 21 September, as explained here, together with contact details. If you'd like to organize a similar course at a location of your choosing, leave a comment at the end of this blog entry and we'll set something up together!

    Read the article

  • Starting an HTML canvas game with no graphics skills

    - by Jacob
    I want to do some hobby game development, but I have some unfortunate handicaps that have me stuck in indecision; I have no artistic talent, and I also have no experience with 3D graphics. But this is just a hobby project that might not go anywhere, so I want to develop the stuff I care about; if the game shows good potential, my graphic "stubs" can be replaced with something more sophisticated. I do, however, want my graphics engine to render something approximate to the end goal. The game is tile-based, with each tile being a square. Each tile also has an elevation. My target platform (subject to modification) is JavaScript rendering to the HTML 5 canvas, either with a 2D or WebGL context. My question to those of you with game development experience is whether it's easier to develop an isometric game using a 2D graphics engine and sprites or a 3D game using rudimentary 3D primitives and basic textures? I realize that there are limitations to isometric projection, but if it makes developing my throwaway graphics engine easier, I'm OK with the visual warts that would be introduced. Or is representing a 3D world with an actual 3D engine easier?

    Read the article

  • Educause Top-Ten IT Issues - the most change in a decade or more

    - by user739873
    The Education IT Issue Panel has released the 2012 top-ten issues facing higher education IT leadership, and instead of the customary reshuffling of the same deck, the issues reflect much of the tumult and dynamism facing higher education generally. I find it interesting (and encouraging) that at the top of this year's list is "Updating IT Professionals' Skills and Roles to Accommodate Emerging Technologies and Changing IT Management and Service Delivery Models." This reflects, in my view, the realization that higher education IT must change in order to fully realize the potential for transforming the institution, and therefore its people must learn new skills, understand and accept new ways of solving problems, and not be tied down by past practices or institutional inertia.

    The remaining 9 top issues all speak, in some form or fashion, to the need for dramatic change, not just in the area of "funding IT" (code for cost containment or reduction), but rather the need to increase the effectiveness and efficiency of the institution through the use of technology—leveraging the wave of BYOD (Bring Your Own Device) to the institution's advantage, rather than viewing it as a threat and a problem to be contained.

    Although it's #10 of 10, IT Governance (and establishment and implementation of the governance model throughout the institution) is key to effectively acting upon many of the preceding issues in this year's list. In the majority of cases, the technology exists to meet the needs and requirements to effectively address many of the challenges outlined in the top-ten issues list. Which brings me to my next point.

    Although I try not to sound too much like an Oracle commercial in these (all too infrequent) blog posts, I can't help but point out how much confluence there is between several of the top issues this year and what my colleagues and I have been evangelizing for some time. Starting from the bottom of the list up:

    1) I'm gratified that research and the IT challenges it presents have made the cut. Big Data (or Large Data as it's phrased in the report) is rapidly going to overwhelm much of what exists today even at our most prepared and well-equipped research universities. Combine large data with the significantly more stringent requirements around data preservation, archiving, sharing, curation, etc. coming from granting agencies like NSF, and you have the brewing storm that could result in a lot of "one-off" solutions to a problem that could very well be addressed collectively and "at scale."

    2) Transformative effects of IT – while I see more and more examples of this, there is still much more that can be achieved. My experience tells me that culture (as the report indicates, or at least poses the question) gets in the way more than technology not being up to the task. We spend too much time on "context" and not "core," and get lost in the weeds on the journey to truly transforming the institution with technology.

    3) Analytics as a key element in improving various institutional outcomes. In our work around Student Success, we see predictive "academic" analytics as essential to getting in front of the Student Success issue, regardless of how an institution or collection of institutions defines success. Analytics must be part of the fabric of the key academic enterprise applications, not a bolt-on. We will spend a significant amount of time on this topic during our semi-annual Education Industry Strategy Council meeting in Washington, D.C. later this month.

    4) Cloud strategy for the broad range of applications in the academic enterprise. Some of the recent work by Casey Green at the Campus Computing Survey would seem to indicate that there is movement in this area, but mostly in what has been termed "below the campus" application areas such as collaboration tools, recruiting, and alumni relations. It's time to get serious about sourcing elements of mature applications like student information systems, HR, Finance, etc. leveraging a model other than traditional on-campus custom.

    I've only selected a few areas of the list to highlight, but the unifying theme here (and this is where I run the risk of sounding like an Oracle commercial) is that these lofty goals cry out for partners that can bring economies of scale to bear on the problems, married with a deep understanding of the nuances unique to higher education. In a recent piece in Educause Review on Student Information Systems, the author points out that "best of breed is back". Unfortunately, I am compelled to point out that best of breed is a large part of the reason we have made as little progress as we have as an industry in advancing some of the causes outlined above. Don't confuse "integrated" and "full stack" for vendor lock-in. The best-of-breed market forces that Ron points to ensure that solutions have to be "integratable" or they don't survive in the marketplace. However, leveraging the efficiencies afforded by adopting solutions that are pre-integrated (and possibly metered out as a service) allows us to shed unnecessary costs – as difficult as these decisions are to make and to drive throughout the organization.

    Cole

    Read the article

  • Reflection and the params Keyword

    - by Robert May
    I’ve had to look this up a couple of times, and there’s not much out there, so I end up guessing the same answer over and over. When using MethodBase.GetParameters() to get an array of ParameterInfo objects, I often want to get a count of the number of parameters that are out, optional, params, etc. For out and optional, you can simply check ParameterInfo.IsOut or ParameterInfo.IsOptional or any number of other “Attributes”. However, for params, there isn’t a property on ParameterInfo. Instead, you have to do this:

        info.GetCustomAttributes(typeof(ParamArrayAttribute), true)

    This will get you a set of all of the attributes that are the ParamArrayAttribute, which you can then turn into a LINQ statement that looks like this:

        methodParameters.Count(info => info.GetCustomAttributes(typeof(ParamArrayAttribute), true).Count() > 0);

    Which, assuming that methodParameters is the result of MethodBase.GetParameters(), will give you a count of the number of parameters that have the params keyword. Of course, there can be only one, but who’s counting! Now, hopefully, the next time I try to look this up, my own blog will get the values.

    Technorati Tags: Reflection
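
    For anyone who wants to try the lookup end to end, here is a minimal, self-contained sketch of the same check; the ParamsDemo class and its Print method are invented purely for illustration:

        using System;
        using System.Linq;
        using System.Reflection;

        class ParamsDemo
        {
            // A sample method whose last parameter uses the params keyword.
            static void Print(string prefix, params int[] values) { }

            static void Main()
            {
                MethodBase method = typeof(ParamsDemo).GetMethod(
                    "Print", BindingFlags.NonPublic | BindingFlags.Static);
                ParameterInfo[] methodParameters = method.GetParameters();

                // Count the parameters decorated with ParamArrayAttribute (i.e. params).
                int paramsCount = methodParameters.Count(info =>
                    info.GetCustomAttributes(typeof(ParamArrayAttribute), true).Length > 0);

                Console.WriteLine(paramsCount);   // prints 1
            }
        }

    Run against Print, the count comes out as 1, since only the values parameter is declared with params.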

    Read the article

  • How is"cloud computing"different from "client-server"?

    - by BellevueBob
    Watching a CEO for a new "cloud computing" company describe his company on a finance TV program today, he said something like "Cloud computing is superior to old-fashioned client-server computing". Now I'm confused. Can someone please explain what "cloud computing" means in contrast to client-server?

    As far as I understand it, cloud computing is more of a network services model, such that I do not own or maintain the physical hardware. The "cloud" is all the back-end stuff. But I still might have an application that communicates with that "cloud" environment. And if I run a web site that presents a form that a user fills out, pushes a button on the page, and returns some report that was generated by the web server, isn't that the same as "cloud" computing? And would you not consider my web browser as the "client"?

    Please note my question is specific to the concept of "cloud computing" with respect to "client-server". Sorry if this is an inappropriate question for this site; it's the one closest in the Stack universe and this is my first time here. I'm an old timer, programming since mainframe days in the late 70's.

    Read the article

  • Ten Classic Electronic Toys and Their Modern Equivalents

    - by Jason Fitzpatrick
    Whether you’re looking to relive the toy exploits of your youth or pass your love of tinkering and electronics onto the younger generation, this list highlights ten great electronic toys of yesteryear and their modern equivalents. Courtesy of Wired’s Geek Dad, the description for the all-in-one electronics kit seen here:

    What it was: Arthur C. Clarke has said that any sufficiently advanced technology is indistinguishable from magic. As a kid in the midst of an increasing technological revolution, electronics were at the heart of that. Learning electronics was made easy through the Science Fair Electronic Project Kits found at RadioShack. Through the project guides, kids could construct various ‘experiments’ by attaching wires to terminal springs that make circuits. The terminal springs would wire in components such as LED segment lights, photo sensors, resistors, diodes, etc. While it was fun getting the projects to work, the manuals lacked in-depth explanation as to what was happening in the circuit to produce the project’s result.

    Why it was awesome: First, it was a simple buy for parents. Everything you needed to get your child interested in electronics was right in the kit. You didn’t need to breadboard or solder. I remember a distinct feeling of accomplishment making a high-water alarm or a light-sensor game with the realization that the bundles of wires springing up from the kit were actually doing something!

    Modern equivalent: You can still pick up variations of the 100-in-1 kits, but their popular replacement seems to be Snap Circuits by Elenco. All of the components are mounted on a plastic base with a contact on either end which interconnect with each other and the plastic base that projects can be mounted to. Each component also has the electrical diagram symbol for that component drawn on it, so it can help you read schematics. For that reason alone, I like these better.

    Read the article

  • learn the programming language for computing functions about integers

    - by asd
    Hi. I know something about Pascal, Mathematica and Matlab, but I don't have any idea about the C, C++ and C# languages. I want to learn one of the languages that are fast and exact enough to compute some arithmetic functions for large numbers (for example, larger than 10^3000). I asked somebody and he said he used C++, and that he computed this sequence in less than 10 minutes. I want to know about C, C++, C# and the Visual variants of these languages, and which is better for my goal.

    Let f be an arithmetic function and let A = {k1, k2, ..., kn} be integers in increasing order. Now I want to start with k1 and compare f(ki) with f(k1). If f(ki) > f(k1), put ki as k1. Now start with ki, and compare f(kj) with f(ki), for j > i. If f(kj) > f(ki), put kj as ki, and repeat this procedure. At the end we will have a subsequence B = {L1, ..., Lm} of A with this property: f(L(i+1)) > f(L(i)), for any 1 <= i <= m-1.

    I have written a code for this program in Mathematica, and it takes some hours to compute f of the ki's, or the set B, for large numbers. For example, let f be the divisor function of integers. Do you know how to write the code for my purpose in Mathematica or Matlab? Mathematica is preferable.
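
    Not an answer for Mathematica, but since C# was one of the languages asked about, here is a rough sketch of the record-keeping scan using System.Numerics.BigInteger. The divisor-count function is a deliberately naive stand-in; numbers anywhere near 10^3000 would need real number-theoretic machinery or a specialized library, so treat this only as an outline of the loop:

        using System;
        using System.Collections.Generic;
        using System.Numerics;

        class RecordScan
        {
            // Naive divisor count; fine for small demo values only.
            static BigInteger DivisorCount(BigInteger n)
            {
                BigInteger count = 0;
                for (BigInteger d = 1; d * d <= n; d++)
                {
                    if (n % d == 0)
                        count += (d * d == n) ? 1 : 2;
                }
                return count;
            }

            static void Main()
            {
                var A = new List<BigInteger> { 2, 3, 4, 6, 8, 12, 16, 24 };
                var B = new List<BigInteger>();          // the record subsequence
                BigInteger best = BigInteger.MinusOne;   // f value of the current record

                foreach (var k in A)
                {
                    BigInteger fk = DivisorCount(k);
                    if (fk > best)                       // keep k only if f(k) beats the last record
                    {
                        best = fk;
                        B.Add(k);
                    }
                }

                Console.WriteLine(string.Join(", ", B)); // 2, 4, 6, 12, 24
            }
        }

    With the sample list above, B comes out as 2, 4, 6, 12, 24, i.e. the entries whose divisor count beats every earlier one.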

    Read the article

  • Collapsing Bookmarks

    - by Tim Dexter
    I said I would tackle documenting some of the new features in the 10.1.3.4.1 roll-up patch I mentioned last week. With the patch you can now set the default state of bookmarks (if you create them) in your PDF outputs. Perhaps your users prefer to see them all collapsed to the base level, or maybe collapsed to the second level to ease navigation; whatever they need. It's another opportunity for you to look like a star! You of course need to start with a table of contents; then add the convert|copy to bookmarks command. You can then add the new collapse command to set the appropriate level in the bookmarks.

        <?copy-to-bookmark:?>
        <?collapse-bookmark:show;2?>
        <<< Table of Contents >>>
        <?end convert-to-bookmark?>

    The command allows you to expand or collapse the bookmarks as you need. Of course you will know how many levels you will have in the final output document. The command takes the form:

        <?collapse-bookmark:show|hide;level int?>

    Some examples:

        <?collapse-bookmark:hide;1?>
        <?collapse-bookmark:hide;2?>
        <?collapse-bookmark:hide;3?>

    Sample template and data here. Don't forget you need that 10.1.3.4.1 roll-up!

    Read the article

  • Help Creating a Google Analytics Funnel for Check out process

    - by Drew
    I have a funnel question. I am currently working on tracking (through GA) guest and logged-in member activity once they get to my site's shopping cart, but I need help with setting up funnels. Specifically, I want to see:

    - Total sales
    - Logged-in member total sales
    - Guest member sales

    The URLs associated with the checkout process are:

    Logged-in members:
    - /cart (arriving at checkout)
    - /checkout (checking out as a logged-in member)
    - /checkout/confirmation (thank you - confirmed sale)

    Guest members:
    - /cart (arriving at checkout)
    - /checkout-guest (checking out as a guest)
    - /checkout/confirmation (thank you - confirmed sale)

    I've tested the funnels set up for the above with 9 transactions, but the end maths doesn't seem to line up.

    The total sales funnel shows 9 completed transactions when only tracking these two URLs: /cart and /checkout/confirmation. Which is great - because it's working.

    Logged-in member sales show a total of 9 completed transactions based on each step of the logged-in URL steps (above) being tracked in a funnel. Not good, because this number should be 3.

    The guest checkout funnel (see guest steps above) shows 9 as well. What the?!?!?!?

    The results I am looking for should reflect the following: total sales = 9, logged-in members = 3, guest members = 6.

    Is there any way to set these URLs up so that the funnels report the correct results - or do I need to change the URLs and provide logged-in members and guests stand-alone purchase confirmation pages (this would mean I could not track total sales combining results from both streams)? Any knowledge in this area is welcome. Thanks.

    Read the article

  • Minimizing data sent over a webservice call on expensive connection

    - by aceinthehole
    I am working on a system that has many remote laptops all connected to the internet through cellular data connections. The application will synchronize periodically to a central database. The problem is, due to factors outside our control, the cost to move data across the cellular networks is spectacularly expensive.

    Currently we are sending a compressed XML file across the wire, where it is processed and various things are done with it (mainly stuffing it into a database). My first couple of thoughts were to convert that XML doc to JSON just prior to transmission and convert it back to XML just after receipt on the other end, and get some extra compression for free without changing much. Another thought was to test various other compression algorithms to determine the smallest one possible. Although, I am not entirely sure how much difference JSON vs XML would make once it is compressed.

    I thought that there must be resources available that address this problem from an information theory perspective. Does anyone know of any such resources, or have suggestions on what direction to go in? This is developed on the MS .NET stack on Windows, for reference.
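
    One way to settle the "how much does JSON vs. XML matter once compressed" question is simply to measure it on real payloads. A rough sketch of such a measurement harness is below; the two payload strings are hypothetical stand-ins for the actual sync document:

        using System;
        using System.IO;
        using System.IO.Compression;
        using System.Text;

        class CompressionCheck
        {
            // Gzip a string and return the compressed size in bytes.
            static int GzippedSize(string payload)
            {
                byte[] raw = Encoding.UTF8.GetBytes(payload);
                using (var buffer = new MemoryStream())
                {
                    using (var gzip = new GZipStream(buffer, CompressionMode.Compress))
                    {
                        gzip.Write(raw, 0, raw.Length);
                    }
                    return buffer.ToArray().Length;
                }
            }

            static void Main()
            {
                // Hypothetical equivalent payloads; real documents would be loaded from the sync files.
                string xml  = "<items><item id=\"1\" name=\"widget\"/><item id=\"2\" name=\"gadget\"/></items>";
                string json = "[{\"id\":1,\"name\":\"widget\"},{\"id\":2,\"name\":\"gadget\"}]";

                Console.WriteLine("xml  raw={0} gz={1}", xml.Length, GzippedSize(xml));
                Console.WriteLine("json raw={0} gz={1}", json.Length, GzippedSize(json));
            }
        }

    GZipStream is just one convenient choice on the .NET stack; the same harness can swap in DeflateStream or a third-party compressor to compare results.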

    Read the article

  • Logging library for (c++) games

    - by Klaim
    I know a lot of logging libraries but haven't tested many of them (Google Log, Pantheios, the coming boost::log library...). In games, especially in remote multiplayer and multithreaded games, logging is vital to debugging, even if you remove all logs in the end.

    Let's say I'm making a PC game (not console) that needs logs (multiplayer and multithreaded and/or multiprocess) and I have good reasons for looking for a logging library (for example, I don't have time, or I'm not confident in my ability to write one correctly for my case). Assuming that I need:

    - performance
    - ease of use (allow streaming or formatting or something like that)
    - reliability (don't leak or crash!)
    - cross-platform (at least Windows, Mac OS X, Linux/Ubuntu)

    Which logging library would you recommend?

    Currently, I think that boost::log is the most flexible one (you can even log remotely!), but it doesn't have good performance. Pantheios is often cited, but I don't have comparison points on performance and usage. I've used my own lib for a long time, but I know it doesn't manage multithreading, so that's a big problem, even if it's fast enough. Google Log seems interesting; I just need to test it, but if you have already compared those libs and more, your advice might be of good use. Games are often performance demanding while complex to debug, so it would be good to know logging libraries that, in our specific case, have clear advantages.

    Read the article

  • Overriding component behavior

    - by deft_code
    I was thinking of how to implement overriding of behaviors in a component-based entity system. A concrete example: an entity has a health component that can be damaged, healed, killed, etc. The entity also has an armor component that limits the amount of damage a character receives.

    Has anyone implemented behaviors like this in a component-based system before? How did you do it? If no one has ever done this before, why do you think that is? Is there anything particularly wrong-headed about overriding component behaviors?

    Below is a rough sketch of how I imagine it would work. Components in an entity are ordered; those at the front get a chance to service an interface first. I don't detail how that is done, just assume it uses evil dynamic_casts (it doesn't, but the end effect is the same without the need for RTTI).

        class IHealth
        {
        public:
            virtual float get_health( void ) const = 0;
            virtual void do_damage( float amount ) = 0;
        };

        class Health : public Component, public IHealth
        {
        public:
            Health( float health ) : m_health( health ) {}
            float get_health( void ) const { return m_health; }
            void do_damage( float amount ) { m_health -= amount; }
        private:
            float m_health;
        };

        class Armor : public Component, public IHealth
        {
        public:
            float get_health( void ) const { return next<IHealth>().get_health(); }
            void do_damage( float amount ) { next<IHealth>().do_damage( amount / 2 ); }
        };

        entity.add( new Health( 100 ) );
        entity.add( new Armor() );
        assert( entity.get<IHealth>().get_health() == 100 );
        entity.get<IHealth>().do_damage( 10 );
        assert( entity.get<IHealth>().get_health() == 95 );

    Is there anything particularly naive about the way I'm proposing to do this?

    Read the article

  • What is the best type of c# timer to use with an Unity game that uses many timers simultaneously?

    - by Kyle Seidlitz
    I am developing a stand-alone 3D game in Unity that will have anywhere from 1 to 200 timers running simultaneously. For this game, timer durations will range from 5 minutes to 4 days. There will not be any countdown displays or any UI for the timers. An object will be selected, a menu choice will then be selected, and the timer will start. Several events will occur at different intervals during the duration of the timer. The events will be confined to changing the material of the selected object and calling a 1-second sound effect like a chime or a bell.

    If the user wants to save or end the game before all the timers are done, the start times of the still-running timers are to be saved to an XML file, so that when the game is started again a calculation can be done for each still-running timer to see whether it is now done, in which case the game will change the materials appropriately.

    I am still trying to figure out what type of timer to use, and would also welcome any suggestions for saving and calculating times that span several days. What class(es) of timers should I use? Are there any special issues I should look out for in terms of performance?
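
    One approach that is often suggested for multi-day timers, offered here only as a hedged sketch with invented class and field names, is to avoid live timer objects entirely: record each timer's start time and duration, then compare against the clock during Update() and right after loading a save:

        using System;
        using System.Collections.Generic;

        // A long-running "timer" is just a recorded start time and duration;
        // nothing has to tick while the game runs or while it is closed.
        [Serializable]
        public class JobTimer
        {
            public string ObjectId;        // hypothetical: which object the timer belongs to
            public DateTime StartUtc;
            public double DurationSeconds; // stored as seconds to keep XML serialization simple

            public bool IsDone(DateTime nowUtc)
            {
                return nowUtc >= StartUtc.AddSeconds(DurationSeconds);
            }
        }

        public class TimerBook
        {
            private readonly List<JobTimer> timers = new List<JobTimer>();

            public void Start(string objectId, TimeSpan duration)
            {
                JobTimer t = new JobTimer();
                t.ObjectId = objectId;
                t.StartUtc = DateTime.UtcNow;
                t.DurationSeconds = duration.TotalSeconds;
                timers.Add(t);
            }

            // Call this once a second or so from Update(), and once right after loading
            // a save, to find timers that expired (possibly days ago, while the game was closed).
            public List<JobTimer> TakeCompleted()
            {
                List<JobTimer> done = new List<JobTimer>();
                DateTime now = DateTime.UtcNow;
                for (int i = timers.Count - 1; i >= 0; i--)
                {
                    if (timers[i].IsDone(now))
                    {
                        done.Add(timers[i]);
                        timers.RemoveAt(i);
                    }
                }
                return done;
            }
        }

    Serializing the list of JobTimer records to XML on save and re-checking them on load covers the case of timers that finished while the game was closed, with no background timers needed.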

    Read the article

  • Update

    - by Jeff Certain
    This blog has been pretty quiet for a year now. There's a few reasons for that. Probably the biggest reason is that I view this as a space where I talk about .NET things. Or software development. While I've been doing the latter for the past year, I haven't been doing the former.

    Yes, I took a trip to the dark side. I started with Ning 11 months ago, in Palo Alto, CA. I had the chance to work with an incredibly talented group of software engineers... in PHP and Java. That was definitely an eye-opening experience, in terms of technology, process, and culture. It was also a pretty good example of how acquisitions can get interesting. I'll talk more about this, I'm sure.

    Last week, I started with a company called Dynamic Signal. I'm a "Back End Engineer" now. Also a very talented team of people, and I'm delighted to be working with them. We're a Microsoft shop. After a year away, I'm very happy to be back. Coming back to .NET is an easy transition, and one that has me being fairly productive straight out of the gate.

    (Some of you may have noticed, my last post was more than a year ago. Yes, it's safe to infer that I didn't get renewed as an MVP. Fair deal; I didn't do nearly as much this year as I have in the past. I'll be starting to speak again shortly, and hope to be re-awarded soon.)

    At any rate, now that I'm back in the .NET space, you can expect to hear more from me soon!

    Read the article

  • Design pattern for an automated mechanical test bench

    - by JJS
    Background

    I have a test fixture with a number of communication/data acquisition devices on it that is used as an end-of-line test for a product. Because of all the various sensors used in the bench and the need to run the test procedure in near real-time, I'm having a hard time structuring the program to be more friendly to modify later on. For example, a National Instruments USB data acquisition device is used to control an analog output (load) and monitor an analog input (current), a digital scale with a serial data interface measures position, an air pressure gauge uses a different serial data interface, and the product is interfaced through a proprietary DLL that handles its own serial communication.

    The hard part

    The "real-time" aspect of the program is my biggest tripping point. For example, I need to time how long the product needs to go from position 0 to position 10,000 to the tenth of a second. While it's traveling, I need to ramp up an output of the NI DAQ when it reaches position 6,000 and ramp it down when it reaches position 8,000. This sort of control looks easy from browsing NI's LabVIEW docs, but I'm stuck with C# for now. All external communication is done by polling, which makes for lots of annoying loops. I've slapped together a loose Producer Consumer model where the Producer thread loops through reading the sensors and sets the outputs. The Consumer thread executes functions containing timed loops that poll the Producer for current data and execute movement commands as required. The UI thread polls both threads for updating some gauges indicating current test progress.

    Unsure where to start

    Is there a more appropriate pattern for this type of application? Are there any good resources for writing control loops in software (non-LabVIEW) that interface with external sensors and whatnot?
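
    For the polling pain specifically, one possible shape for the producer/consumer split (sketched with placeholder device calls, since the real NI and serial APIs aren't shown here) is to have the producer publish snapshots into a BlockingCollection and let the test sequence consume them, so the timed steps never touch the hardware directly:

        using System;
        using System.Collections.Concurrent;
        using System.Threading;
        using System.Threading.Tasks;

        // One snapshot of every sensor at a single poll; the producer creates a new
        // instance per poll, so consumers never see a half-updated reading.
        public class Sample
        {
            public DateTime Time;
            public double PositionMm;   // from the digital scale (placeholder)
            public double CurrentA;     // from the NI DAQ analog input (placeholder)
        }

        public class Bench
        {
            private readonly BlockingCollection<Sample> samples = new BlockingCollection<Sample>(1000);

            // Producer: poll the hardware as fast as it allows and publish snapshots.
            public Task StartAcquisition(CancellationToken token)
            {
                return Task.Run(() =>
                {
                    while (!token.IsCancellationRequested)
                    {
                        samples.Add(new Sample
                        {
                            Time = DateTime.UtcNow,
                            PositionMm = ReadScale(),   // placeholder for the serial scale read
                            CurrentA = ReadDaq()        // placeholder for the NI DAQ read
                        });
                        Thread.Sleep(10);               // roughly 100 Hz; tune per device
                    }
                    samples.CompleteAdding();
                }, token);
            }

            // Consumer: a test step reacts to the sample stream instead of re-polling devices.
            public void RunTravelStep()
            {
                foreach (var s in samples.GetConsumingEnumerable())
                {
                    if (s.PositionMm >= 6000 && s.PositionMm < 8000)
                        SetLoad(true);                  // ramp the DAQ output up (placeholder)
                    else if (s.PositionMm >= 8000)
                    {
                        SetLoad(false);                 // ramp back down and end the step
                        break;
                    }
                }
            }

            // Stubs for illustration only; the real calls live behind the device drivers.
            private double ReadScale() { return 0; }
            private double ReadDaq() { return 0; }
            private void SetLoad(bool on) { }
        }

    The UI can keep polling a "latest sample" reference for its gauges; only the sequencing logic needs the full stream.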

    Read the article

  • Libgdx ParallaxScrolling and TiledMaps

    - by kirchhoff
    I implemented ParallaxScrolling for my SideScroller project; everything is working but the tiled map (the most important part!). I've been trying out everything but it doesn't work (see the code below). I'm using ParallaxCamera from GdxTests, and it's working perfectly for the background layers. I can't explain myself properly in English, so I recorded 2 videos:

    Before parallaxScrolling
    After parallaxScrolling

    As you can see, now the platforms appear in the middle of the Y-axis. I've got a Map class with 2 tiled maps, so I need two renderers too:

        private TiledMapRenderer renderer1;
        private TiledMapRenderer renderer2;

        public void update(GameCamera camera) {
            renderer1.setView(camera.calculateParallaxMatrix(1f, 0f),
                    camera.position.x - camera.viewportWidth / 2,
                    camera.position.y - camera.viewportHeight / 2,  // the code I think I should change
                    camera.viewportWidth, camera.viewportHeight);
            renderer2.setView(camera.calculateParallaxMatrix(1f, 0f),
                    camera.position.x - camera.viewportWidth / 2,
                    camera.position.y - camera.viewportHeight / 2,  // the code I think I should change
                    camera.viewportWidth, camera.viewportHeight);
        }

    The marked lines are the ones I think I should change. I've tried changing parameters, even adding hardcoded values, etc., but one of two things happens: 1. Nothing happens. 2. The platforms disappear.

    Here is some additional code. The render method:

        world.update(delta);
        parallaxBackground.update(camera);

        clear(0.5f, 0.7f, 1.0f, 1);
        batch.setProjectionMatrix(camera.calculateParallaxMatrix(0, 0));
        batch.disableBlending();
        batch.begin();
        batch.draw(background, -(int) background.getRegionWidth() / 2, -(int) background.getRegionHeight() / 2);
        batch.end();
        batch.enableBlending();

        parallaxBackground.draw(batch, camera);
        renderer.render(batch);

    Read the article

  • Cloned partition not seen correctly by disk utility & gparted

    - by enrico
    Some days ago I cloned my /dev/sda1 partition with Clonezilla in partition-to-partition mode to /dev/sda3. It worked, but now that I've finished the setup of the system in /dev/sda3, I wanna reinstall /dev/sda1 for other stuff. This partition is NOT mounted, but Ubuntu's Disk Utility thinks it is, while it doesn't see the currently active / partition /dev/sda3 as mounted. This is the df -TH output:

        Filesystem     Type      Size  Used  Avail Use% Mounted on
        /dev/sda3      ext4      32G   6.2G  24G   21%  /
        udev           devtmpfs  2.2G  13k   2.2G  1%   /dev
        tmpfs          tmpfs     845M  906k  844M  1%   /run
        none           tmpfs     5.3M  0     5.3M  0%   /run/lock
        none           tmpfs     2.2G  115k  2.2G  1%   /run/shm
        /dev/sdb1      fuseblk   321G  147G  174G  46%  /Dati

    GParted instead sees /dev/sda1 as NOT mounted (checked with the Information option), but it displays the BOOT flag on this partition, while the real booted partition, /dev/sda3, doesn't have it. If I try to format the /dev/sda1 partition, it gives me this error:

        GParted 0.8.1 --enable-libparted-dmraid
        Libparted 2.3

        Format /dev/sda1 as ext2                 00:00:02    ( ERROR )
            calibrate /dev/sda1                  00:00:00    ( SUCCESS )
                path: /dev/sda1
                start: 2048
                end: 62500863
                size: 62498816 (29.80 GiB)
            set partition type on /dev/sda1      00:00:02    ( SUCCESS )
                new partition type: ext2
            create new ext2 file system          00:00:00    ( ERROR )
                mkfs.ext2 -L "" /dev/sda1
                mke2fs 1.41.14 (22-Dec-2010)
                /dev/sda1 is mounted; will not make a filesystem here!

    How is it possible to correct this behaviour? Is this due to some lacking option in the Clonezilla phase? TIA, Enrico

    Read the article

  • WLS Console Timeout

    - by john.graves(at)oracle.com
    The WebLogic console timeout is a great feature for security, yet a horrible feature during development. Logging in over and over again gets to be annoying. This is very easy to change, but I would never do this on a production system!

    Find the WebLogic consoleapp weblogic.xml file. This is typically in your WL_HOME/server/lib/consoleapp/webapp/WEB-INF/ directory.

    Edit the weblogic.xml file: update the section shown and increase the timeout-secs. I just throw an extra zero at the end giving me ten full hours of fun!!!

        <session-descriptor>
          <timeout-secs>36000</timeout-secs>
          <invalidation-interval-secs>60</invalidation-interval-secs>
          <cookie-name>ADMINCONSOLESESSION</cookie-name>
          <cookie-max-age-secs>-1</cookie-max-age-secs>
          <cookie-http-only>false</cookie-http-only>
          <url-rewriting-enabled>false</url-rewriting-enabled>
        </session-descriptor>

    Read the article

  • How to create an extensible rope in Box2D?

    - by Thomas
    Let's say I'm trying to create a ninja lowering himself down a rope, or pulling himself back up, all whilst he might be swinging from side to side or hit by objects. Basically like http://ninja.frozenfractal.com/ but with Box2D instead of hacky JavaScript. Ideally I would like to use a rope joint in Box2D that allows me to change the length after construction. The standard Box2D RopeJoint doesn't offer that functionality. I've considered a PulleyJoint, connecting the other end of the "pulley" to an invisible kinematic body that I can control to change the length, but PulleyJoint is more like a rod than a rope: it constrains maximum length, but unlike RopeJoint it constrains the minimum as well. Re-creating a RopeJoint every frame using a new length is rather inefficient, and I'm not even sure it would work properly in the simulation. I could create a "chain" of bodies connected by RotationJoints but that is also less efficient, and less robust. I also wouldn't be able to change the length arbitrarily, but only by adding and removing a whole number of links, and it's not obvious how I would connect the remainder without violating existing joints. This sounds like something that should be straightforward to do. Am I overlooking something?

    Read the article

  • NetBeans, JSF, and MySQL Primary Keys using AUTO_INCREMENT

    - by MarkH
    I recently had the opportunity to spin up a small web application using JSF and MySQL. Having developed JSF apps with Oracle Database back-ends before and possessing some small familiarity with MySQL (sans JSF), I thought this would be a cakewalk. Things did go pretty smoothly... but there was one little "gotcha" that took more time than the few seconds it really warranted.

    The Problem

    Every DBMS has its own way of automatically generating primary keys, and each has its pros and cons. For the Oracle Database, you use a sequence and point your Java classes to it using annotations that look something like this:

        @GeneratedValue(strategy=GenerationType.SEQUENCE, generator="POC_ID_SEQ")
        @SequenceGenerator(name="POC_ID_SEQ", sequenceName="POC_ID_SEQ", allocationSize=1)

    Between creating the actual sequence in the database and making sure you have your annotations right (watch those typos!), it seems a bit cumbersome. But it typically "just works", without fuss.

    Enter MySQL. Designating an integer-based field as PRIMARY KEY and using the keyword AUTO_INCREMENT makes the same task seem much simpler. And it is, mostly. But while NetBeans cranks out a superb "first cut" for a basic JSF CRUD app, there are a couple of small things you'll need to bring to the mix in order to be able to actually (C)reate records. The (RUD) performs fine out of the gate.

    The Solution

    Omitting all design considerations and activity (!), here is the basic sequence of events I followed to create, then resolve, the JSF/MySQL "Primary Key Perfect Storm":

    1. Fire up NetBeans.
    2. Create JSF project.
    3. Create Entity Classes from Database.
    4. Create JSF Pages from Entity Classes.
    5. Test run.
    6. Try to create record and hit error.

    It's a simple fix, but one that was fun to find in its completeness. :-) Even though you've told it what to do for a primary key, a MySQL table requires a gentle nudge to actually generate that new key value. Two things are needed to make the magic happen.

    First, you need to ensure the following annotation is in place in your Java entity classes:

        @GeneratedValue(strategy = GenerationType.IDENTITY)

    All well and good, but the real key is this: in your controller class(es), you'll have a create() function that looks something like this, minus the comment line and the setId() call shown here:

        public String create() {
            try {
                // Assign 0 to ID for MySQL to properly auto_increment the primary key.
                current.setId(0);
                getFacade().create(current);
                JsfUtil.addSuccessMessage(ResourceBundle.getBundle("/Bundle").getString("CategoryCreated"));
                return prepareCreate();
            } catch (Exception e) {
                JsfUtil.addErrorMessage(e, ResourceBundle.getBundle("/Bundle").getString("PersistenceErrorOccured"));
                return null;
            }
        }

    Setting the current object's primary key attribute to zero (0) prior to saving it tells MySQL to get the next available value and assign it to that record's key field. Short and simple... but not inherently obvious if you've never used that particular combination of NetBeans/JSF/MySQL before.

    Hope this helps!
    All the best,
    Mark

    Read the article

  • Usage of repository between EF model and code consumer

    - by jim
    I have binary data in my database that I'll have to convert to a bitmap at some point. I was thinking about whether or not it's appropriate to use a repository and do it there. My consumer, which is a presentation layer, will use this repository. For example:

        // This is a class I created for modeling the item as is.
        public class RealItem
        {
            public string Name { get; set; }
            public Bitmap Image { get; set; }
        }

        public abstract class BaseRepository
        {
            // Using Unity (http://unity.codeplex.com) to inject the dependency of the entity context.
            [Dependency]
            public Context Context { get; set; }
        }

        public class ItemRepository : BaseRepository
        {
            public List<RealItem> Select()
            {
                IEnumerable<Items> items = from item in Context.Items select item;
                List<RealItem> lst = new List<RealItem>();
                foreach (var itm in items)
                {
                    MemoryStream stream = new MemoryStream(itm.Image);
                    Bitmap image = (Bitmap)Image.FromStream(stream);
                    RealItem ritem = new RealItem { Name = itm.Name, Image = image };
                    lst.Add(ritem);
                }
                return lst;
            }
        }

    Is this a correct way to use the repository pattern? I'm learning this pattern and I've seen a lot of examples online that are using a repository, but when I looked at their source code... for example:

        public IQueryable<object> Select()
        {
            return from q in base.Context select q;
        }

    As you can see, no behavior is added to the system by their approach, so I was confused that maybe a repository is something else and I got it all wrong. In the end there should be extra benefits of using them, right?

    Read the article

  • Defaulting the HLSL Vertex and Pixel Shader Levels to Feature Level 9_1 in VS 2012

    - by Michael B. McLaughlin
    I love Visual Studio 2012. But this is not a post about that. This is a post about tweaking one particular parameter that I’ve found a bit annoying.

    Disclaimer: You will be modifying important MSBuild files. If you screw up you will break your build tools. And maybe your computer will catch fire. I’m not responsible. No warranties or guaranties of any sort. This info is provided “as is”.

    By default, if you add a new vertex shader or pixel shader item to a project, it will be set to build with shader profile 4.0_level_9_3. If you need 9_3 functionality, this is all well and good. But (especially for Windows Store apps) you really want to target the lowest shader profile possible so that your game will run on as many computers as possible. So it’s a good idea to default to 9_1.

    To do this you could add in new HLSL files via “Add->New Item->Visual C++->HLSL->______ Shader File (.hlsl)” and then edit the shader files’ properties to set them manually to use 9_1 via “Properties->HLSL Compiler->General->Shader Model”. This is fine unless you forget to do this once and then submit your game with 9_3 shaders instead of 9_1 shaders to the Windows Store or to some other game store. Then you’d wind up with either rejection or angry “this doesn’t work on my computer! ripoff!” messages.

    There’s another option though. In “Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ItemTemplates\VC\HLSL\1033\VertexShader” (note the path might vary slightly for you if you are using a 32-bit system or have a non-ENU version of Visual Studio 2012) you will find a “VertexShader.vstemplate” file. If you open this file in a text editor (e.g. Notepad++), then inside the CustomParameters tag within the TemplateContent tag you should see a CustomParameter tag for the ShaderType, i.e.:

        <CustomParameter Name="$ShaderType$" Value="Vertex"/>

    On a new line, we are going to add another CustomParameter tag to the CustomParameters tag. It will look like this:

        <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>

    such that we now have:

        <CustomParameters>
          <CustomParameter Name="$ShaderType$" Value="Vertex"/>
          <CustomParameter Name="$ShaderModel$" Value="4.0_level_9_1"/>
        </CustomParameters>

    You can then save the file (you will need to be an Administrator or have Administrator access). Back in the 1033 directory (or whatever the number is for your language), go into the “PixelShader” directory. Edit the “PixelShader.vstemplate” file and make the same change (note that this time $ShaderType$ is “Pixel” not “Vertex”; you shouldn’t be changing that line anyway, but if you were to just copy and replace the above four lines then you would wind up creating pixel shaders that the HLSL compiler would try to compile as vertex shaders, with all sorts of weird errors as a result).

    Once you’ve added the $ShaderModel$ line to “PixelShader.vstemplate” and have saved it, everything should be done. Since Feature Level 9_1 and 9_3 don’t support any of the other shader types, those are set to default to their appropriate minimums already (Compute and Geometry are set to “4.0” and Domain and Hull are set to “5.0”, which are their respective minimums, though not all 4.0 cards support Compute shaders; they were an optional feature added with DirectX 10.1 and only became required for DirectX 11 hardware).
In case you are wondering where these magic values come from, you can find them all in the “fxc.xml” file in the “\Program Files (x86)\MSBuild\Microsoft.CPP\v4.0\V110\1033” directory (or whatever your language number is; 1033 is ENU and various other product languages have their own respective numbers (see: http://msdn.microsoft.com/en-us/goglobal/bb964664.aspx ) such that Japanese is 1041 (for example), though for all I know MSBuild tasks might be 1033 for everyone). If, like me, you installed VS 2012 to a drive other than the C:\ drive, you will find the vstemplate files in the drive to which you installed VS 2012 (D:\ in my case) but you will find the fxc.xml file on the C:\ drive. You should not edit fxc.xml. You will almost definitely break things by doing that; it’s just something you can look through to see all the other options that the FXC task takes such that you could, if needed, add further CustomParameter tags if you wanted to default to other supported options. I haven’t tried any others though so I don’t have any advice on how to set them.

    Read the article

  • Managing the Domino Effect (with Tutor Publisher Reports)

    - by [email protected]
    When an organization upgrades their business application or improves a process, it triggers changes that will reverberate throughout an organization, like a falling row of dominoes standing on end. A tangible and repeatable way to communicate change is with updated process documentation. But how do organizations get their arms around all the documents that are impacted by an application upgrade or process improvement? A small change in one place will trigger subsequent changes in other areas. A simple domino chain of questions can go like this. What screens have changed? Do the new screens change the process in place? In what procedural documents are the screens referenced? Who uses the screens and must be notified of the changes? What other documents are affected? Will the change affect current company policy? Tutor Publisher compiles focused, easy to read impact analysis reports of your process documentation library that answer these tough questions. Tutor reports make it easy to quickly target the information and documents that require updating. In turn, the updated documents are used to communicate the change. The Tutor writing methodology and Publisher reports provide organizations the means to confidently keep documentation in sync with the way the business runs. Start managing the domino effect in your organization. Get a grip on it here!

    Read the article

  • Software design of a browser-based strategic MMO game

    - by Mehran
    I wonder if there are any known, tested software designs for Travian-like browser-based strategic MMO games? I mean, how would they implement the server of such games, and what is stored in the database versus what is stored in RAM? Is the state of the world stored in one piece or is it distributed among a number of storage nodes? Does anyone know a resource to study the problems and solutions of creating such games?

    [UPDATE] As suggested in the comments, I'm going to give an example of how I would design such a project, even though I'm not sure it's the right one. Having stored the world state in MongoDB, I would implement an event collection in which all the changes to the world are registered. Changes that are meant to happen in the future will come with an action date set in the future, and those that are to be carried out immediately will be set to now. Having this datastore as the central point of the system, players will issue their actions as events inserted into the datastore. At the other end of the system, I'll have a constantly running process taking events out of the datastore which are due to be carried out and not done yet. Executing an event means applying some update to the world's state, and thus to the datastore.

    As scalable as this design sounds, I'm not sure if it will be worth implementing. For one, it is pointless to cache the datastore, as most updates happen once without any follow-ups. For instance, if you have the growth of resources in your game, you'll be updating the whole world state periodically, in which case, having incorporated a cache, you are keeping the whole world in RAM (which most likely is impossible). So can someone come up with a better design?
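
    As a purely illustrative sketch of that worker loop (an in-memory list stands in for the MongoDB event collection, and every name here is invented), the job reduces to "find due, unexecuted events and apply them":

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading;

        // One change to the world, scheduled for some time (possibly "now").
        class WorldEvent
        {
            public DateTime DueUtc;
            public string Description;   // e.g. "village 42: +50 wood"
            public bool Executed;
        }

        class EventProcessor
        {
            // Stand-in for the events collection in the datastore.
            private readonly List<WorldEvent> events = new List<WorldEvent>();
            private readonly object gate = new object();

            public void Enqueue(WorldEvent e)
            {
                lock (gate) { events.Add(e); }
            }

            // The constantly running worker: pick up events that are due and not yet done,
            // apply them to the world state, and mark them executed.
            public void RunOnce()
            {
                List<WorldEvent> due;
                lock (gate)
                {
                    due = events.Where(e => !e.Executed && e.DueUtc <= DateTime.UtcNow).ToList();
                }

                foreach (var e in due)
                {
                    Console.WriteLine("applying: " + e.Description);  // update world state here
                    e.Executed = true;
                }
            }

            public void RunForever(CancellationToken token)
            {
                while (!token.IsCancellationRequested)
                {
                    RunOnce();
                    Thread.Sleep(250);   // polling interval; a real system might use a tailable cursor
                }
            }
        }

    In a real deployment the Where filter becomes a datastore query on the executed flag and due date, and several workers could share the load as long as claiming an event is atomic.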

    Read the article
