Search Results

Search found 14924 results on 597 pages for 'selector performance'.


  • Cheating on Technical Debt

    - by Tony Davis
    One bad practice guaranteed to cause dismay amongst your colleagues is passing on technical debt without full disclosure. There can only be two reasons for this: either the developer or DBA didn't know the difference between good and bad practices, or they concealed the debt. Neither reflects well on their professional competence. Technical debt, or code debt, is a convenient term to cover all the compromises between the ideal solution and the actual solution, reflecting the reality of the pressures of commercial coding. The one time you're guaranteed to hear one developer, or DBA, pass judgment on another is when he or she inherits their project, and is surprised by the amount of technical debt left lying around in the form of inelegant architecture, incomplete tests, confusing interface design, no documentation, and so on. It is often expedient for a Project Manager to ignore the build-up of technical debt, the cut corners, not-quite-finished features and rushed designs that mean progress is satisfyingly rapid in the short term. It's far less satisfying for the poor person who inherits the code. Nothing sends a colder chill down the spine than the dawning realization that you've inherited a system crippled with performance and functional issues that will take months of pain to fix before you can even begin to make progress on any of the planned new features. It's often hard to justify this 'debt paying' time to the project owners and managers. It just looks as if you are making no progress, in marked contrast to your predecessor. There can be many good reasons for allowing technical debt to build up, at least in the short term. Often, rapid prototyping is essential, there is a temporary shortfall in test resources, or the domain knowledge is incomplete. It may be necessary to hit a specific deadline with a prototype, or proof-of-concept, to explore a possible market opportunity, with planned iterations and refactoring to follow later. However, it is a crime for a developer to build up technical debt without making this clear to the project participants. He or she needs to record it explicitly. A design compromise made in order to hit a deadline, be it an outright hack, or a decision made without time for rigorous investigation and testing, needs to be documented with the same rigor with which one tracks a bug. What's the best way to do this? Ideally, we'd have some kind of objective assessment of the level of technical debt in a software project, although that smacks of science fiction even as I write it. I'd be interested to hear of any methods you've used, but I'm sure most teams have to rely simply on the integrity of their colleagues and the clear perceptions of the project manager… Cheers, Tony.

    Read the article

  • Oracle Manageability Presentations at Collaborate 2012

    - by Get_Specialized!
    Attending the Collaborate 2012 event, April 22-26th in Las Vegas, and interested in learning more about becoming specialized on Oracle Manageability? Be sure to check out the sessions below, presented by subject matter experts, while you're onsite. Set up a meeting, or be one of the first Oracle Partners onsite to ask me, and we'll request one of the limited FREE Oracle Enterprise Manager 12c partner certification exam vouchers for you. Can't travel this year? COLLABORATE 12 Plug Into Vegas may be another option, letting you attend from your own desk presentations like session #489, Oracle Enterprise Manager 12c: What's Changed? What's New?, presented by Oracle Specialized Partners like ROLTA.

    Session ID | Title | Presented by | Day
    920 | Enterprise Manager 12c Cloud Control: New Features and Best Practices | Dell | Sun
    9536 | Release 12 Apps DBA 101 | Justadba, LLC | Mon
    932 | Monitoring Exadata with Cloud Control | Oracle | Mon
    397 | OEM Cloud Control Hands On Performance Tuning | | Mon
    118 | Oracle BI Sys Mgmt Best Practices & New Features | Rittman Mead Consulting | Mon
    548 | High Availability Boot Camp: RAC Design, Install, Manage | Database Administration, Inc | Mon
    926 | The Only Complete Cloud Management Solution -- Oracle Enterprise Manager | Oracle | Mon
    328 | Virtualization Boot Camp | Dell | Mon
    292 | Upgrading to Oracle Enterprise Manager 12c - Best Practices | Southern Utah University | Mon
    793 | Exadata 101 - What You Need to Know | Rolta | Tue
    431 & 1431 | Extreme Database Administration: New Features for Expert DBAs | Oracle | Tue, Wed
    521 | What's New for Oracle WebLogic Management: Capabilities that Scripting Cannot Provide | Oracle | Thu
    338 | Oracle Real Application Testing: A look under the hood | PayPal | Tue
    9398 | Reduce TCO Using Oracle Application Management Suite for Oracle E-Business Suite | Oracle | Tue
    312 | Configuring and Managing a Private Cloud with Oracle Enterprise Manager 12c | Dell | Tue
    866 | Making OEM Sing and Dance with EMCLI | Portland General Electric | Tue
    533 | Oracle Exadata Monitoring: Engineered Systems Management with Oracle Enterprise Manager | Oracle | Wed
    100600 | Optimizing EnterpriseOne System Administration | Oracle | Wed
    9565 | Optimizing EBS on Exadata | Centroid Systems | Wed
    550 | Database-as-a-Service: Enterprise Cloud in Three Simple Steps | Oracle | Wed
    434 | Managing Oracle: Expert Panel on Techniques and Best Practices | Oracle Partners: Dell, Keste, ROLTA, Pythian | Wed
    9760 | Cloud Computing Directions: Understanding Oracle's Cloud | AT&T | Wed
    817 | Right Cloud: Use Oracle Technologies to Avoid False Cloud | Visual Integrator Consulting | Wed
    163 | Forgetting something? Standardize your database monitoring environment with Enterprise Manager 11g | Johnson Controls | Wed
    489 | Oracle Enterprise Manager 12c: What's Changed? What's New? | ROLTA | Thu

    Read the article

  • On the art of self-promotion

    - by Tony Davis
    I attended Brent Ozar's Building the Fastest SQL Servers session at Tech Ed last week, and found myself engulfed in a 'perfect storm' of excellent technical and presentational skills coupled with an astute awareness of the value of promoting one's work. I spend a lot of time at such events talking to developers and DBAs about the value of blogging and writing articles, and my impression is that some could benefit from a touch less modesty and a little more self-promotion. I sense a reticence in many would-be writers. Is what I have to say important enough? Haven't far more qualified and established commentators, MVPs and so on, already said it? While it's a good idea to pick reasonably fresh and interesting topics, it's more important not to let such fears lead to writer's block. In the eyes of any future employer, your published writing is an extension of your resume. They will not care that a certain MVP knows how to solve problem x, but they will be very interested to see that you have tackled that same problem, and solved it in your own way, and described the process in your own voice. In your current job, your writing is one of the ways you can express to your peers, and to the organization as a whole, the value of what you contribute. Many Developers and DBAs seem to rely on the idea that their work will speak for itself, and that their skill shines out from it. Unfortunately, this isn't always true. Many Development DBAs, for example, will be painfully aware of the massive effort involved in tuning and adding resilience to rapidly developed applications. However, others in the organization who are unaware of what's involved in getting an application that is 'done' ready for production may dismiss such efforts as fussiness or conservatism. At the dark end of the development cycle, chickens come home to roost, but their droppings tend to land on those trying to clear up the mess. My advice is this: next time you fix a bug or improve the resilience or performance of a database or application, make sure that you use team meetings, informal discussions and so on to ensure that people understand what the problem was and what you had to do to fix it. Use your blog to describe, generally, the process you adopted, the resources you used and the insights that came from your work. Encourage your colleagues to do the same. By spreading the art of self-promotion to everyone involved in an IT project, we get a better idea of the extent of the work and the value of the contribution of all the team members. As always, we'd love to hear what you think. This very week, Simple-talk launches its new blogging platform. If any of this has moved you to 'throw your hat into the ring', drop us a mail at [email protected]. Cheers, Tony.

    Read the article

  • Oracle OpenWorld Call for MDM Papers

    - by david.butler(at)oracle.com
    As the MDM Track owner, I would like to invite everyone to respond to the Oracle OpenWorld (October 2-6, Moscone Center, San Francisco) Call for Papers (https://oracleus.wingateweb.com/portal/cfp/ ). The Call for Papers is open now through Sunday, March 27. This is an outstanding opportunity for organizations familiar with MDM to tell their story to a very large, knowledgeable and intensely interested community. Opportunities for feedback and networking abound. I would love to see MDM papers on:

    - business drivers
    - business benefits
    - quantified ROI stories
    - business process optimization
    - implementation styles
    - implementation lessons learned
    - using master data as a service
    - data governance best practices
    - end-to-end data quality experiences
    - support for SOA
    - Chart of Accounts issues fixed
    - how to leverage reference data
    - improving EPM and/or BI across the board
    - operationalizing a data warehouse
    - support for cloud computing
    - compliance success stories
    - architecture, scalability, and mixed workload RAC platform performance examples
    - industry specific value propositions (Financial Services; Retail; Telecom; Manufacturing; High Tech Manufacturing; Public Sector; Health Care; …)
    - line of business specific value propositions (CRM, ERP, PLM, SCM, …)

    In fact, given that MDM positively impacts all areas of operations and analytics, there are no limits to the ideas you may have for an OpenWorld presentation. When you follow the submission process, be sure to use “Master Data Management” for either the Primary or Optional track. Add “Master Data Management” as an Optional track if you are adding MDM content to a presentation on one of the following tracks: Agile; Customer Relationship Management; Oracle E-Business Suite; Product Lifecycle Management; Siebel; Sourcing and Procurement; Supply Chain Management; or one of the 18 available industry tracks. If Cloud Computing is included, please add “Cloud Computing” as a Cross-Stream Track. And don’t forget to make “MDM” a Tag, along with Business Intelligence, Cloud, CRM, Data Integration, Data Migration, Data Warehousing, EPM, or Service-Oriented Architecture whenever your content includes these items. I will personally review each submission. I hope you all keep me very busy over the next few weeks.

    Read the article

  • Started wrong with a project. Should I start over?

    - by solidsnake
    I'm a beginner web developer (one year of experience). A couple of weeks after graduating, I was offered a job building a web application for a company whose owner is not much of a tech guy. He recruited me to avoid theft of his idea, the high cost of development charged by a service company, and to have someone young he can trust on board to maintain the project for the long run (I came to these conclusions by myself long after being hired). Cocky as I was back then, with a diploma in computer science, I accepted the offer thinking I could build anything. I was calling the shots. After some research I settled on PHP, and started with plain PHP: no objects, just ugly procedural code. Two months later, everything was getting messy, and it was hard to make any progress. The web application is huge. So I decided to check out an MVC framework that would make my life easier. That's where I stumbled upon the cool kid in the PHP community: Laravel. I loved it, it was easy to learn, and I started coding right away. My code looked cleaner and more organized. It looked very good. But again, the web application was huge. The company was pressuring me to deliver the first version, which they wanted to deploy, obviously, and start seeking customers. Because Laravel was fun to work with, it made me remember why I chose this industry in the first place - something I forgot while stuck in the shitty education system. So I started working on small projects at night, reading about methodologies and best practices. I revisited OOP, moved on to object-oriented design and analysis, and read Uncle Bob's book Clean Code. This helped me realize that I really knew nothing. I did not know how to build software THE RIGHT WAY. But at this point it was too late, and now I'm almost done. My code is not clean at all, just spaghetti code; fixing a bug is a real pain; all the logic is in the controllers; and there is little object-oriented design. I keep having this persistent thought that I have to rewrite the whole project. However, I can't do it... They keep asking when it's all going to be done. I cannot imagine this code deployed on a server. Plus, I still know nothing about code efficiency or the web application's performance. On one hand, the company is waiting for the product and cannot wait any longer. On the other hand, I can't see myself going any further with the current code. I could finish up, wrap it up and deploy, but god only knows what might happen when people start using it. What do you think I should do?

    Read the article

  • Stop trying to be perfect

    - by Kyle Burns
    Yes, Bob is my uncle too.  I also think the points in the Manifesto for Software Craftsmanship (manifesto.softwarecraftsmanship.org) are all great.  What amazes me is that we tend to confuse the term “well crafted” with “perfect”.  I'm about to say something that will make Quality Assurance managers, and many development types as well, cringe until you think about it as a craftsman: “Stop trying to be perfect”. Now let me explain what I mean.  Building software, as with building almost anything, often involves a series of trade-offs where either one undesired characteristic is accepted as necessary to achieve another desired one (or maybe stave off one that is even less desirable), or a desirable characteristic is sacrificed for the same reasons.  This implies that perfection itself is unattainable.  What is attainable is “sufficient”, and I think that this really goes to the heart both of what people are trying to do with Agile and with the craftsmanship movement.  Simply put, sufficient software drives the greatest business value.   I've been in many meetings where “how can we keep anything from ever going wrong” has become the thing that holds us in analysis paralysis.  I've also been the guy trying way too hard to perfect some function to make sure that every edge case is accounted for.  Somewhere in there, something a drill instructor said while I was in boot camp occurred to me.  In response to being asked a question by another recruit having to do with some edge case (I can barely remember the context), he said “What if grasshoppers had machine guns?  Would the birds still **** with them?”  It sounds funny, but there's a lot of wisdom in those words.   “Sufficient” is different for every situation, and it’s important to understand what sufficient means in the context of the work you’re doing.  If I’m writing a timesheet application (and please shoot me if I am), I’m going to have a much higher tolerance for imperfection than if you’re writing software to control life support systems on spacecraft.  I’m also likely to have less need for high volume performance than if you’re writing software to control stock trading transactions.   I’d encourage anyone who has read this far, instead of trying to be perfect, to try to create software that is sufficient in every way.  If you’re working to make a component that is sufficient “better”, ask yourself if there is any component left that is not yet sufficient.  If the answer is “yes”, you’re working on the wrong thing and need to adjust.  If the answer is “no”, why aren’t you shipping and delivering business value?

    Read the article

  • ADF version of "Modern" dialog windows

    - by Martin Deh
    It is no surprise, with the popularity of the i-devices (iPhone, iPad), that much of the iOS UI based LnF (look and feel) would start to inspire web designers to incorporate the same LnF into their web sites.  Take, for example, a normal dialog popup.  In the iOS world, the LnF becomes a bit more elegant by adding just a simple element such as a "floating" close button. In this blog post, I will describe how this can be accomplished using OOTB ADF components and CSS3 style elements. There are two ways that this can be achieved.  The easiest way is to simply replace the default close icon image and adjust the af|panelWindow::close-icon-style skin selector. Using this simple technique, you can produce the floating close button. The CSS code to produce this effect is pretty straightforward:

        af|panelWindow.test::close-icon-style
        {
            background-image: url("../popClose.gif");
            line-height: 10px;
            position: absolute;
            right: -10px;
            top: -10px;
            height: 38px;
            width: 38px;
            outline: none;
        }

    You can see from the CSS that the position of the region which holds the image is relocated based on the position attributes.  Also, the addition of the "outline" attribute removes the border that is visible in Chrome and IE.  The second example is based on not having an image to produce the close button.  Like the previous sample, I will use the OOTB panelWindow.  However, this time I will use an OOTB commandButton to replace the image.  The commandButton is positioned first in the component hierarchy, making the re-positioning easier.  The commandButton will also need a style class assigned to it (i.e. closeButton), which will allow for the positioning and the over-riding of the default skin attributes of a default button.  In addition, the closeIconVisible property is set to false, since the default icon is no longer needed.  Once this is done, the rest is in the CSS.  I used this approach in a sample created for an actual customer POC. The CSS code for the button:

        af|commandButton.closeButton, af|commandButton.closeButton af|commandButton:text-only
        {
            line-height: 10px;
            position: absolute;
            right: -10px;
            top: -10px;
            -webkit-border-radius: 70px;
            -moz-border-radius: 70px;
            -ms-border-radius: 70px;
            border-radius: 70px;
            background-image: none;
            border: #828c95 1px solid;
            background-color: black;
            font-weight: bold;
            text-align: center;
            text-decoration: none;
            color: white;
            height: 30px;
            width: 30px;
            outline: none;
        }

    The CSS uses the border radius to create the round effect on the button (in IE 8, where border-radius is not supported, this will only work with some added code). Also, I add the box-shadow attribute to the panelWindow style class to give it a nice shadowing effect.
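    For completeness, here is a guess at what that box-shadow rule might look like; only the idea of shadowing the panelWindow is from the post, and the offset, blur, and color values below are assumptions:

        af|panelWindow.test
        {
            /* assumed values - adjust to taste */
            -webkit-box-shadow: 0 4px 12px rgba(0, 0, 0, 0.5);
            -moz-box-shadow: 0 4px 12px rgba(0, 0, 0, 0.5);
            box-shadow: 0 4px 12px rgba(0, 0, 0, 0.5);
        }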

    Read the article

  • Oracle HCM Cloud Customer Q&A with WAXIE Sanitary Supply

    - by HCM-Oracle
    At this year’s Oracle HCM User Group (OHUG) Global conference, we had the opportunity to sit down with Oracle HCM Cloud customers for a short Q&A. We got to hear about what brought them to the OHUG conference, some of the benefits they are receiving from their Oracle HCM Cloud solutions, and advice they would give other businesses looking to move to the cloud. Below is a discussion we had with Melissa Halverson, Benefits & HRIS Manager at WAXIE Sanitary Supply.

    Q: What made you attend the OHUG Global Conference this year?
    Halverson: The biggest reason is networking. It allows me to connect with others in the Oracle HCM Cloud community. I was able to speak at the HCM Cloud SIG (Special Interest Group) on the first day and share my experiences as well as hear the experiences of other Oracle HCM Cloud users. It also allows me to get face-time with key people within Oracle.

    Q: What Oracle HCM solutions are you currently using?
    Halverson: Global HR, Benefits, Workforce Compensation, and Performance Management.

    Q: Do you plan to invest further in Oracle HCM?
    Halverson: Yes, we are interested in Time and Labor. We would also like to get Recruiting at some point in the future.

    Q: What would you say is the most significant benefit you’ve realized from your use of Oracle HCM solutions?
    Halverson: First and foremost would be process improvement. Before we had Oracle HCM Cloud we relied on a paper process where something as simple as an employee address change required changes to be made manually in 9 different systems. Obviously that was extremely inefficient, but it also increased the likelihood of errors being made. The other huge benefit we have seen was in making information visible to the people that need it. Prior to implementing Oracle HCM Cloud, it was very difficult for anyone to access and make use of the information in our systems. Now, we can provide this information to those who need it to make better decisions.

    Q: What advice would you give an organization looking to move their HR systems to the cloud?
    Halverson: One thing I think many organizations don't spend enough time doing is thoroughly vetting their implementation partner. I believe you should be vetting your implementation partner as much as you did the system itself. Also, manpower is so important. Involve as large a team as possible because you don’t want to get stuck having too few bodies to help out. And set realistic time frames. Biting off more than you can chew will inevitably result in failure. Having a phased approach is always best rather than trying to do everything at once.

    Thanks for the tips, Melissa. Enjoy the rest of the conference!

    Read the article

  • #SSAS #Tabular Workshop and Community Events in Netherlands and Denmark

    - by Marco Russo (SQLBI)
    Next week I will finally start the roadshow of the SSAS Tabular Workshop, a 2-day seminar about the new BISM Tabular model for Analysis Services that has been introduced in SQL Server 2012. During these roadshows, we always try to arrange some speeches at local community events in the evening - Copenhagen is already defined, and we have a logistics issue in Amsterdam that we're trying to solve. Here is the timetable:

    Netherlands
    - SSAS Workshop in Amsterdam, NL - April 16-17, 2012: 2-day seminar; Alberto and I will be the trainers for this event (register here).
    - We're trying to arrange a community event but we still don't have a confirmation, stay tuned.

    Denmark
    - SSAS Workshop in Copenhagen, DK - April 26-27, 2012: 2-day seminar; Alberto and I will be the trainers for this event (register here).
    - Community event on April 26, 2012: this event will run in Hellerup, at the Microsoft venue. All details are available here: http://msbip.dk/events/26/msbip-mode-nr-5/. People from Sweden are welcome! Just register with this private group on LinkedIn to announce your presence, so we'll know how many people will attend.

    In the community events we'll deliver two speeches - here are the descriptions:

    Inside xVelocity (VertiPaq): PowerPivot and BISM Tabular models in Analysis Services share a great columnar-based database engine called the xVelocity in-memory analytics engine (VertiPaq). If you want to improve performance and optimize the memory used, you have to understand some basic principles about how this engine works, how data is compressed, and how you can design a data model for better optimization. Prepare yourself to change your mind: xVelocity optimization techniques might seem counterintuitive and are absolutely different from OLAP and SQL ones!

    Choosing between Tabular and Multidimensional: You have a new project and you have to make an important decision upfront. Should you use Tabular or Multidimensional? It is not easy to answer, because sometimes there is a clear choice, but most of the time both decisions might be correct, at least at the beginning. In this session we'll help you make an informed decision, correctly evaluating the pros and cons of each according to common scenarios, considering both short-term and long-term consequences of your choice.

    I hope to meet many people on these first dates. We have many other events coming in May and June, including an online event (for US time zones), and you can also attend our PreCon Day at TechEd US in Orlando (PRC06) or TechEd Europe in Amsterdam. I'll be a good customer for airline companies in the next three months! I'm just sorry that I haven't had time to write other articles in the last month, but I'm accumulating material that I will need to write down during some flight - stay tuned…

    Read the article

  • Drawing lots of tiles with OpenGL, the modern way

    - by Nic
    I'm working on a small tile/sprite-based PC game with a team of people, and we're running into performance issues. The last time I used OpenGL was around 2004, so I've been teaching myself how to use the core profile, and I'm finding myself a little confused. I need to draw in the neighborhood of 250-750 48x48 tiles to the screen every frame, as well as maybe around 50 sprites. The tiles only change when a new level is loaded, and the sprites are changing all the time. Some of the tiles are made up of four 24x24 pieces, and most (but not all) of the sprites are the same size as the tiles. A lot of the tiles and sprites use alpha blending. Right now I'm doing all of this in immediate mode, which I know is a bad idea. All the same, when one of our team members tries to run it, he gets very bad frame rates (~20-30 fps), and it's much worse when there are more tiles, especially when a lot of those tiles are the kind that are cut into pieces. This all makes me think that the problem is the number of draw calls being made. I've thought of a few possible solutions to this, but I wanted to run them by some people who know what they're talking about so I don't waste my time on something stupid:

    TILES:
    1. When a level is loaded, draw all the tiles once into a framebuffer attached to a big honking texture, and just draw a big rectangle with that texture on it every frame.
    2. Put all the tiles into a static vertex buffer when the level is loaded, and draw them that way. I don't know if there's a way to draw objects with different textures with a single call to glDrawElements, or if this is even something I'd want to do. Maybe just put all the tiles into a big giant texture and use funny texture coordinates in the VBO? (A sketch of this idea follows below.)

    SPRITES:
    1. Draw each sprite with a separate call to glDrawElements.
    2. Use a dynamic VBO somehow. Same texture question as number 2 above.
    3. Point sprites? This is probably silly.

    Are any of these ideas sensible? Is there a good implementation somewhere I could look over?
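    By way of illustration only, here is a minimal C++ sketch of tile idea 2 combined with the "one big giant texture" variant: every tile quad goes into a single static VBO with texture coordinates into one atlas, so an entire tile layer becomes one draw call. Tile, the atlas layout in atlasRegion(), and the shader attribute locations are all assumptions for the sketch, not a known-good implementation:

        #include <GL/glew.h>
        #include <vector>

        struct Tile   { int id; float x, y; };          // assumed level data
        struct UV     { float u0, v0, u1, v1; };
        struct Vertex { float x, y, u, v; };            // position + atlas UV

        const float TILE_SIZE = 48.0f;                  // 48x48 tiles, as in the post

        // Hypothetical atlas layout: tiles packed 16 per row in one texture.
        static UV atlasRegion(int id)
        {
            const float cell = 1.0f / 16.0f;
            float u = (id % 16) * cell, v = (id / 16) * cell;
            return { u, v, u + cell, v + cell };
        }

        struct Batch { GLuint vao = 0, vbo = 0; GLsizei count = 0; };

        // Build once per level load; the buffer then never changes (GL_STATIC_DRAW).
        Batch buildTileBatch(const std::vector<Tile>& tiles)
        {
            std::vector<Vertex> verts;
            verts.reserve(tiles.size() * 6);            // two triangles per tile

            for (const Tile& t : tiles) {
                UV r = atlasRegion(t.id);
                float x0 = t.x, y0 = t.y;
                float x1 = t.x + TILE_SIZE, y1 = t.y + TILE_SIZE;
                Vertex quad[6] = {
                    {x0, y0, r.u0, r.v0}, {x1, y0, r.u1, r.v0}, {x1, y1, r.u1, r.v1},
                    {x0, y0, r.u0, r.v0}, {x1, y1, r.u1, r.v1}, {x0, y1, r.u0, r.v1},
                };
                verts.insert(verts.end(), quad, quad + 6);
            }

            Batch b;
            b.count = (GLsizei)verts.size();
            glGenVertexArrays(1, &b.vao);
            glBindVertexArray(b.vao);
            glGenBuffers(1, &b.vbo);
            glBindBuffer(GL_ARRAY_BUFFER, b.vbo);
            glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                         verts.data(), GL_STATIC_DRAW);
            glEnableVertexAttribArray(0);               // assumed shader location 0: position
            glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (void*)0);
            glEnableVertexAttribArray(1);               // assumed shader location 1: texcoord
            glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                                  (void*)(2 * sizeof(float)));
            glBindVertexArray(0);
            return b;
        }

        // Per frame: one texture bind, one draw call for the whole tile layer.
        void drawTileBatch(const Batch& b, GLuint atlasTexture)
        {
            glBindTexture(GL_TEXTURE_2D, atlasTexture);
            glBindVertexArray(b.vao);
            glDrawArrays(GL_TRIANGLES, 0, b.count);
            glBindVertexArray(0);
        }

    The sprites could use the same vertex layout with a GL_DYNAMIC_DRAW (or GL_STREAM_DRAW) buffer refilled each frame; fifty quads is a trivial upload, and it still collapses fifty draw calls into one.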

    Read the article

  • What to use C++ for?

    - by futlib
    I really love C++. However, I'm struggling to find good uses for it lately. It is still the language to use if you're building huge systems with huge performance requirements. Like backend/infrastructure code at Google and Facebook, or high-end games. But I don't get to do stuff like that. It's also a good choice for code that runs close to the hardware. I'd like to do more low-level stuff, but it isn't part of my job, and I can't think of useful private projects that would involve that. Traditionally, C++ was also a good choice for rich client applications, but those are mostly written in C# and Obj-C lately - and aren't really that important anymore, with everything being a web app. Or a mobile app, which are mostly written in Obj-C and Java. And of course, web-based desktop and mobile apps are quite prominent, too. At my job, I work mostly on web applications, using Java, JavaScript and Groovy. Java is a good/popular choice for non-Google-scale backends, Groovy (or Python, or Ruby or Node.js) is pretty good for the server-side of web apps and JavaScript is the only real choice for the client-side. Even the little games I'm writing in my spare time are lately mostly written in JavaScript, so they can run in the browser. So what would you suggest I could use C++ for? I'm aware that this question is very similar. However, I don't want to learn C++, I was a professional C++ programmer for years. I want to keep doing it and find good new use cases for it. I know that I can use C++ for web apps/games. I could even compile C++ to JavaScript with Emscripten. However, it doesn't seem like a good idea. I'm looking for something C++ is really good at to stay competent in the language. If your answer is: Just give up and forget C++, you'll probably never need it again, so be it.

    Read the article

  • Managing Joomla via Android

    Surprisingly, it was only today that I actually looked for possible solutions to write more content for my blog. For quite some time I've been using my Samsung Galaxy Tab 10.1 for all kinds of social media activities like Google+, FB, etc. but also for my casual mail during the evening hours. And yes, I feel a little bit guilty about missing the chance to use my tablet to write some content here... OK, only a little bit. ;-)

    These are not the droids you are looking for

    But those lazy times are over! While searching the Play Store with the expression 'joomla' I got three interesting hits:

    - Joomla Admin Mobile!
    - Joooid
    - Joomla! Security Checklist

    After reading the reviews I installed the two latter apps.

    Joomla! Security Checklist

    The author clearly outlines that the app is primarily for his personal purpose of having a safety checklist at hand at any time. I guess that any reader of this article has an Android-based smartphone or tablet, so that simple app should be part of your toolbox when using Joomla! for your websites.

    Joooid plugin & app

    Although I was looking for an app that could work with the default XML-RPC interface of Joomla, I have to admit that this combination of an enhanced web service and app suits me better, mainly for performance reasons. The official website has not only the downloads for Joomla versions 1.5 - 2.5 but also very good and easy-to-follow step-by-step instructions to prepare your server for the Android app. It will take you less than 5 minutes to get it up and running. For safety reasons, I recommend that you configure your web server to have an additional authentication layer on the plugins folder. The smartphone app has the ability to run against HTTP authentication. Personally, I like the look and feel of the app. It is a little bit different compared to the web UI but still easy to use. In fact, this article is the first one written in the Joooid app. At the moment, I only miss the ability to have list tags.

    Quick and easy

    Writing full-fledged articles with images, a couple of hyperlinks and some styling here and there should be left to the desktop. At least for the moment. Let's see whether I'm going to change my mind on this during the upcoming months... I'll give it a try, and hope to publish at least once per month using Joooid. Actually, it would be great to have some feedback about other Joomla! clients in the wild.

    Read the article

  • BIP 10.1.3.4.x June 2010 Update Available

    - by Tim Dexter
    A new patchset for 10.1.3.4.0 and 10.1.3.4.1 is available on Metalink. Some notes:

    - The patch number is 9791839.
    - This patchset includes 28 new bug fixes since the last patchset release on March 31.
    - This is a cumulative update that includes all the fixes and enhancements from previous updates. The patch will supersede the other two updates.
    - Install instructions are in the readme inside the patch.
    - There is also a new BIP client patch available, 9821068. No new template-building features to my knowledge, but there is an update to the template viewer to allow you to test and debug your shiny new Excel templates.

    Server

    8529759  XMLP_TEMPLATE_DESIGNER CANNOT SAVE / UPLOAD TEMPLATE
    8566455  BI PUBLISHER SCHEDULER DOES NOT START WITH JNDI DATA SOURCE
    9295667  RESPONSE OF GETSCHEDULEDREPORTINFO RETURNS STATUS AS 'UNKNOWN' INSTEAD OF 'SCHED
    9542413  UNABLE TO CREATE A NEW TEMPLATE FROM UI
    9546137  EXCEL ANALYZER TEMPLATE FAILS FOR A STRUCTURED XML WHEN IT IS UPLOADED
    9556338  SIEBEL - BIP PARAMETERS SORT ORDER
    9560562  BI PUBLISHER CACHE DIRECTORY FILLING UP AND POINTING TO INVALID DIRECTORY
    9646599  USER ROLE DEFINED AS PRIMARYGROUP IN ACTIVEDIRECTORY GROUP ARE NOT RECOGNIZED
    9664768  ER: NEED TO BIND USER ATTRIBUTE VALUES DEFINED IN ACTIVEDIRECTORY IN DATA QUERY
    9665075  BI PUBLISHER AFTER 9546699 NOTIFICATIONS FOR REPORTS FAIL
    9669973  ER: NEED TO SUPPORT PRE-PROCESSING XML WITH XSL FOR EXCEL TEMPLATE
    9704401  ER: NEED TO SUPPORT DEFAULT GROUP FOR ALL USERS IN LDAP/AD SECURITY
    9711899  SEARCH PARAMETER IS NOT VISIBLE WHEN SCHEDULE A REPORT
    9753736  SOME ROLES FROM ACTIVEDIRECTORY ARE NOT LISTED IN ADMIN ROLE-FOLDER MAPPING
    9771354  MULTIPLE PARAMETERS IN 10.1.3.4.1 DATA TEMPLATE ACT DIFFERENTLY FROM 10.1.3.
    9772982  "REFRESH OTHER PARAMETERS ON CHANGE" DOESN'T WORK PROPERLY

    Core

    8599646  ER: EXTRA SPACE ADDED BELOW IMAGE IN A TABLE CELL OF TEMPLATE IN FIREFOX
    9377593  SOME ROWS HEIGHT IN HTML/EXCEL OUTPUT ARE TOO BIG IN BI PUBLISHER
    9487030  NAVIGATION TREE REPEATING TWICE IN PDF DOCUMENT CREATED BY BI PUBLISHER
    9509432  PERFORMANCE ISSUE WHEN USING PDF TEMPLATE
    9534424  PS: DOCUMENT-REPEAT-FULLPATH-ELEMENTNAME SHOULDNT USE DOT "." AS PATH SEPARATOR
    9553360  FORMPROCESSOR CANNOT PARSE SOME PDF TEMPLATES
    9554959  TEXT IN AUTOSHAPE IS NOT PROPERLY CUT OFF FOR LINE WRAPPING
    9569417  AFTER APPLYING PATCH 9509432 PDF TEMPLATES WITH DBDRV PRODUCE NO OUTPUT
    9571670  ER: EXCEL TEMPLATE TO SUPPORT XSLT LOGIC AND XSL CUSTOM EXTENTIONS
    9589809  XSL:CALL-TEMPLATE IS MISSING IN GENERATED XSL FILE
    9605920  BOOKMARK TESTCASE FAILED DUE TO ER9283933
    9689634  PRINT FLOW CHART USING ACROSS 3 DOWN 0 GIVES EXTRA BLANK PAGES

    You might have noticed some fixes and enhancements to the Excel templates, so I can get back on those now. There is a part two to the Mapviewer BIP Mashup coming ... just need another 4 hours in the day to squeeze it in.

    Read the article

  • Oracle GoldenGate: Knowledge Document Series Post #2

    - by Doug Reid
    For our second post in this series the team would like to highlight the knowledge document “How-To: Oracle GoldenGate – Heartbeat Process to Monitor Lag and Performance”. This knowledge document outlines a procedure to reliably measure lag between source and target systems through the use of 'heartbeat' tables. The basic idea is to have a table on the source system that gets updated at a predetermined interval. In your capture processes you would capture the update from the heartbeat table. Using tokens, you would add some additional information to the heartbeat record to be able to tell which extract process was capturing the update. This additional information would be used downstream to calculate the real lag time between the source and target systems for a given extract, and by checking the last update time on the heartbeat at the target you could also determine if data has stopped flowing between the source and target. Click here to view the document

    Read the article

  • eSTEP Newsletter October 2012 now available

    - by uwes
    Dear Partners,

    We would like to inform you that the October '12 issue of our Newsletter is now available. The issue contains information on the following topics:

    News from Corp: Oracle Announces Oracle Solaris 11.1 at Oracle OpenWorld; Oracle Announces Oracle Exadata X3 Database In-Memory Machine; Oracle Enterprise Manager 12c introduces New Tools and Programs for Partners; Oracle Unveils First Industry-Specific Engineered System - the Oracle Networks Applications Platform; Oracle Unveils Expanded Oracle Cloud Offerings; Oracle Outlines Plans to Make the Future Java During JavaOne 2012 Strategy Keynote; Some interesting Java Facts and Figures; Oracle Announces MySQL 5.6 Release Candidate

    Technical Section: What's up with LDoms (4 tech articles); Oracle SPARC T4 Systems cut Complexity, cost of Cryptographic Tasks; PeopleSoft Enterprise Financials 9.1; PeopleSoft HCM 9.1 combined online and batch benchmark; Product Update Bulletin Oracle Solaris Cluster Oct 2012; Sun ZFS Storage 7420; SPARC Product Line Update; SPARC M-series - New DAT 160 plus EOL of M3000 series; SPARC SuperCluster and SPARC T4 Servers Included in Enterprise Reference Architecture Sizing Tool; Oracle Magazine

    Learning & Events: Recently delivered Techcasts: An Update after the Oracle Open World; An Update on OVM Server for SPARC; Update to Oracle Database Appliance

    References: Bridgestone Aircraft Tire Reduces Required Disk Capacity by 50% with Virtualized Storage Solution; Fiat Group Automobiles Aligns Operational Decisions with Strategy by Using End-to-End Enterprise Performance Management System; Birkbeck, University of London Develops World-Class Computer Science Facilities While Reducing Costs with Ultrareliable and Scalable Data Infrastructure

    How to: Introducing Oracle System Assistant; How to Prepare a ZFS Storage Appliance to Serve as a Storage Device; Migrating Oracle Solaris 8 P2V with Oracle Database 10.2 and ASM; White paper on Best Practices for Building a Virtualized SPARC Computing Environment; How to extend the Oracle Solaris Studio IDE with NetBeans Plug-Ins; How I simplified Oracle Database 11g Installation on Oracle Linux 6

    You will find the Newsletter on our portal under eSTEP News ---> Latest Newsletter. You will need to provide your email address and the PIN below to get access. The link to the portal is shown below.

    URL: http://launch.oracle.com/
    PIN: eSTEP_2011

    Previously published Newsletters can be found under the Archived Newsletters section, and more useful information under the Events, Download and Links tabs. Feel free to explore, and any feedback is appreciated to help us improve the service and information we deliver.

    Thanks and best regards,
    Partner HW Enablement EMEA

    Read the article

  • 6 Ways to Modernize Your Customer Experience

    - by Mike Stiles
    If customers have changed, if the way they research and shop has changed, if their expectations have changed, if their ability to act on dissatisfaction has changed, but your customer experience has NOT changed, what was once “good enough” may now be crippling. Well, the customer has changed, and why wouldn’t they? You’ve probably changed too in your role as consumer. There’s more info available, it’s easier to get, there’s more choice, you’re more mobile, you’re more connected, it’s easier to buy, and yes, it’s easier to switch brands if experiences don’t meet your now higher expectations. Thanks to technological advances, we as marketers can increasingly work borderline miracles. But if we’re still not adamantly adopting customer centricity, and if we aren’t making the customer experience paramount amongst business goals, the tech is wasted. A far more modern customer experience is called for. Here are 6 ways to get there:

    1. Modern Marketing: Marketing data is aggregated and targeted to the right customers, who are getting personal, relevant communications. In return, you’re getting insight that finally properly attributes revenue to your marketing efforts.
    2. Modern Selling: Demand is being driven across all channels with modern selling tools. Productivity is up thanks to coordinated communication and selling, and performance is ever optimized using powerful analytics.
    3. Modern CPQ: You’re cross-selling and upselling more effectively since reps and channel partners have been empowered with the ability to quickly, automatically generate 100% accurate, customer-friendly quotes complete with price controls and automated approvals.
    4. Modern Commerce: You’re leveraging data and delivering personalized, targeted digital experiences to everyone. You’re attracting more visitors, and you’re able to scale and keep up with the market and control the experience.
    5. Modern Service: You’re better serving your customers by making it easier for them to engage with your brand, plus you’re lowering your costs by increasing agent and tech support efficiencies.
    6. Modern Social: You’re getting faster, deeper, more accurate insights from social and turning content around faster, which then goes out to the right people at the right time in the right place. You’ve also gotten proactive in your service, and customers love that.

    For far too many brands, the buying journey of Need, Research, Select, Buy, Use, Recommend across the multiple connect points of Social, Mobile, Store, Call Center, Site, Ecommerce is a disconnected mess. Oracle’s approach to CX is to connect every interaction your customer has with your brand, avoiding the revenue losses lousy customer experiences bring. How important is the experience to customers? 94% are willing to pay more of their hard-earned money to have better ones, while a meager 1% say they get the good, consistent experiences they expect. Brands, your words aren’t as loud anymore, so your actions as they relate to customer experience are going to have to do the talking. @mikestiles @oraclesocial
    Photo: Julien Tromeur, freeimages.com

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed Computing - and more importantly “-as-a-Service” models of computing - have a different cost model. This is something that sounds obvious on the surface but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better. In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re using things that now cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie back the cost of a series of clicks to what a user will pay to do so, you can set a profit margin that is easy to track. Here’s a case in point: Assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to maximize it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit. The key is this: from the very outset - the design - make sure you include metrics to measure cost and performance (sometimes these are the same) for your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on premises into your new application structure.

    Read the article

  • Why is my dual-boot Ubuntu partition showing up as a peripheral "root.disk"?

    - by Don
    I recently installed Ubuntu 12.04, which I had been booting from a USB key, as a dual-boot on my machine running Windows 7. From what I had read online while researching, I was prepared to have to shrink the Windows partition and all that. But I never had to - it really was just a few clicks here and there and it was installed. I'm still pretty confused about it, but whatever, it worked, and the two peacefully coexist on my machine, and I have broken things to fix before I worry about fixing unbroken things. So yesterday I got it in my head to look at my partitions (I was considering making an all-new partition to install the Windows 8 Release Preview). What I saw confused me. Here's a screenshot of the disk utility. At this moment, there is nothing connected to my computer, and nothing in any of the optical drives/ports/card readers/etc. Can you help me figure out what's going on here? Don's Machine is, I believe, my Windows partition - that's the name I assigned my machine from Windows Explorer. PQSERVICE is, from what I can find online, also Windows', but having to do with backup. And SYSTEM REQUIRED, if I browse it in Ubuntu, is definitely something to do with booting, and I believe it is also Windows'. According to the sizes shown, those three together should use up my 500 GB HD. Then further down, as a "peripheral device", it lists that 31 GB disk. This is obviously my Ubuntu (Model: Linux Loop: root.disk), but why is it showing up as a peripheral? So, to sum up those questions and to add some more random ones I had:

    - Why is Ubuntu showing up as a peripheral device?
    - If the Windows sections take up all 500 GB, where does Ubuntu live?
    - If I renamed the disk partitions, would my life become a nightmare (seriously, can I safely rename them)?
    - Why didn't I have to resize the Windows partition in the first place?
    - Would giving Ubuntu more space improve its performance (it hangs a lot)?
    - Is it possible to have a partition for each OS (Windows 7 & 8, Ubuntu), a partition for files, and a separate partition for backups? Is this towards the good or bad idea end of the spectrum?

    @Elfy, would that explain why it keeps hanging? I guess I'll back up my files, rip it out, and reinstall it correctly later on today.

    Read the article

  • Updating an Entity through a Service

    - by GeorgeK
    I'm separating my software into three main layers (maybe tiers would be a better term):

    1. Presentation ('Views')
    2. Business logic ('Services' and 'Repositories')
    3. Data access ('Entities' (e.g. ActiveRecords))

    What do I have now? In the Presentation layer, I use read-only access to Entities, returned from Repositories or Services, to display data:

        $banks = $banksRegistryService->getBanksRepository()->getBanksByCity( $city );
        $banksViewModel = new PaginatedList( $banks ); // some way to display banks;
                                                       // example, not real code

    I find this approach quite efficient in terms of performance and code maintainability, and still safe as long as all write operations (create, update, delete) are performed through a Service:

        namespace Service\BankRegistry;

        use Service\AbstractDatabaseService;
        use Service\IBankRegistryService;
        use Model\BankRegistry\Bank;

        class Service extends AbstractDatabaseService implements IBankRegistryService
        {
            /**
             * Registers a new Bank
             *
             * @param string $name Bank's name
             * @param string $bik Bank's Identification Code
             * @param string $correspondent_account Bank's correspondent account
             *
             * @return Bank
             */
            public function registerBank( $name, $bik, $correspondent_account )
            {
                $bank = new Bank();
                $bank
                    -> setName( $name )
                    -> setBik( $bik )
                    -> setCorrespondentAccount( $correspondent_account );

                if( null === $this->getBanksRepository()->getDefaultBank() )
                    $this->setDefaultBank( $bank );

                $this->getEntityManager()->persist( $bank );

                return $bank;
            }

            /**
             * Makes the $bank system's default bank
             *
             * @param Bank $bank
             * @return IBankRegistryService
             */
            public function setDefaultBank( Bank $bank )
            {
                $default_bank = $this->getBanksRepository()->getDefaultBank();
                if( null !== $default_bank )
                    $default_bank->setDefault( false );

                $bank->setDefault( true );

                return $this;
            }
        }

    Where am I stuck? I'm struggling with how to update certain fields in the Bank Entity.

    Bad solution #1: making a series of setters in the Service for each setter in Bank. This seems quite redundant, increases the Service interface's complexity and proportionally decreases its simplicity - something to avoid if you care about code maintainability. I try to follow the KISS and DRY principles.

    Bad solution #2: modifying Bank directly through its native setters. This is really bad. If you ever need to move a modification into the Service, it will be painful. Business logic should remain in the Business logic layer. Plus, there are plans to log all of the actions and maybe even involve user permissions (perhaps through decorators) in the future, so all modifications should be made only through the Service.

    Possible good solution: creating an updateBank( Bank $bank, $array_of_fields_to_update ) method. This makes the interface as simple as possible, but there is a problem: one should not try to manually set the isDefault flag on a Bank; that operation should be performed through the setDefaultBank method. It gets even worse when you have relations that you don't want to be directly modified. Of course, you can just limit the fields that can be modified by this method, but how do you tell the method's user what they can and cannot modify? Exceptions?
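    One way to answer that last question is to make the whitelist itself the documentation. The sketch below is an illustration, not the original author's code: updateBank() accepts only whitelisted fields, and anything else raises an exception whose message names both the allowed fields and the dedicated method (setDefaultBank) that guards the rest. Field, method and exception names are assumptions.

        // Inside the Service class shown above - sketch only, names are illustrative.

        /** Fields a caller may change through updateBank(); everything else is guarded. */
        private static $updatable_fields = array( 'name', 'bik', 'correspondentAccount' );

        /**
         * Updates whitelisted fields of a Bank
         *
         * @param Bank  $bank
         * @param array $fields map of field => new value
         *
         * @return Bank
         * @throws \InvalidArgumentException for a field outside the whitelist
         */
        public function updateBank( Bank $bank, array $fields )
        {
            foreach( $fields as $field => $value )
            {
                if( !in_array( $field, self::$updatable_fields, true ) )
                    throw new \InvalidArgumentException(
                        "'" . $field . "' cannot be changed through updateBank(). " .
                        "Updatable fields: " . implode( ', ', self::$updatable_fields ) . ". " .
                        "Guarded fields have dedicated methods, e.g. setDefaultBank()." );

                $setter = 'set' . ucfirst( $field );   // e.g. setName(), setBik()
                $bank->$setter( $value );
            }

            return $bank;
        }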

    Read the article

  • 80 Years of Supplier Misinformation: How can Oracle Supplier Hub Help?

    - by Mala Narasimharajan
    By Mark Peachy

    Well, we're down to the final week before this year's Oracle Open World conference kicks off on Sunday, and there's still plenty of work to be done to be ready in time. One of the great benefits I think that attendees get from Open World is the opportunity to listen to other organizations talk about their implementation experiences. Typically, these sessions provide hugely valuable insights that have been gained during a deployment, delivering a wealth of practical information on what it really takes to get an organization up and running with a new module or a revamped business process. And I'm not just saying this because we're lucky enough to have one of our early implementers join us for this year's Supplier Hub/Supplier Lifecycle Management MDM session! With a multi-phased deployment underway, this customer is working to fix a long, 80-year history without much in the way of formal processes or tools to manage all of their accumulated supplier information. Faced with a mess of supplier details, they had been challenged to efficiently track supplier spend, monitor performance, maintain qualification information or carry out meaningful risk analysis. Join us on Wednesday to hear how they are addressing these issues and the plans they have to evolve their supplier management techniques - it's a great story.

    CON9242: Oracle Supplier Lifecycle Management and Oracle Supplier Hub for Better Supply Base Management
    Wednesday, October 3rd at 1:15 PM
    InterContinental Hotel, Sutter Suite

    Read the article

  • What are the drawbacks of sending XML to browsers and let them apply XSLT?

    - by MainMa
    Context: Working as a freelance developer, I often made websites completely based on XSLT. In other words, on every request, an XML file is generated, containing everything we need to know about the page content: the name of the user currently logged in, the top menu entries, whether this menu is dynamic/configurable, the text to display in a specific area of the page, etc. Then XSL processes it (with caching, etc.) into the HTML/XHTML page sent to the browser. Its good point is that it makes it easier to create small-scale websites, especially with PHP. It is a sort of template engine, but one I prefer to other template engines because it's much more powerful than most of them, and because I know it better and like it. It is also possible, when needed, to give access to the raw XML data on demand for automated access, without the need to create separate APIs. Of course, it fails completely on any medium-scale or large-scale website, since, even with good caching techniques, XSL still degrades overall website performance and requires more CPU server-side.

    Question: Modern browsers have the ability to take an XML file and transform it with an associated XSL file declared in the XML like <?xml-stylesheet href="demo.xslt" type="text/xsl"?>. Firefox 3 can do it. Internet Explorer 8 can do it too. This means it is possible to migrate XSL processing from the server to the client side for 50% of users (according to browser statistics on several websites where I may want to implement this). It means that those 50% of users will receive only the XML file on each request, thus reducing both their and the server's bandwidth (the XML file being much shorter than its processed HTML analog) and reducing the server's CPU usage. What are the drawbacks of this technique? I thought about several, but they don't seem to apply in this situation:

    - Difficult implementation and the need to choose, based on the browser request, when to send raw XML and when to transform it to HTML instead. Obviously, the system will not be much more difficult than the current one. The only changes to make are to add the XSL file link to every XML and to add a browser check (a rough sketch of such a check follows below).
    - More IO and bandwidth usage, since the XSLT file will be downloaded by the browsers instead of staying on the server. I don't think it will be a problem, since the XSLT file will be cached by the browsers (just as images, CSS, and JavaScript files are).
    - Possibly some problems on the client side, like maybe problems when saving a page in some browsers.
    - Difficulty debugging the code: it is impossible to obtain the HTML source the browser is actually using, since the only displayed source is the downloaded XML. On the other hand, I rarely look at HTML code on the client side, and in most cases, it is unusable directly (whitespace being removed).
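    To make the first drawback concrete, here is a rough sketch of that browser check in PHP (which the post already uses). Everything here is illustrative: generatePageXml() is a hypothetical helper and the user-agent test is deliberately crude; only DOMDocument and XSLTProcessor are the real PHP extensions (ext/dom, ext/xsl).

        <?php
        // Crude capability check - an assumption for illustration, not production sniffing.
        function clientCanTransform( $userAgent )
        {
            return (bool) preg_match( '/Firefox|MSIE [89]/', $userAgent );
        }

        $xml = generatePageXml();   // hypothetical: builds the page XML for this request

        if( clientCanTransform( $_SERVER['HTTP_USER_AGENT'] ) )
        {
            // Send raw XML; the processing instruction tells the browser which
            // stylesheet to fetch (and cache) and apply on the client side.
            header( 'Content-Type: application/xml' );
            echo '<?xml version="1.0"?>' . "\n";
            echo '<?xml-stylesheet href="demo.xslt" type="text/xsl"?>' . "\n";
            echo $xml;
        }
        else
        {
            // Fallback: transform on the server with ext/xsl, as before.
            $xmlDoc = new DOMDocument();
            $xmlDoc->loadXML( $xml );
            $xslDoc = new DOMDocument();
            $xslDoc->load( 'demo.xslt' );

            $proc = new XSLTProcessor();
            $proc->importStylesheet( $xslDoc );

            header( 'Content-Type: text/html' );
            echo $proc->transformToXML( $xmlDoc );
        }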

    Read the article

  • Oracle Database In-Memory

    - by Mike.Hallett(at)Oracle-BI&EPM
    Larry Ellison unveiled the next major milestone in database technology, Oracle Database In-Memory, on June 10, 2014. Oracle Database In-Memory will be generally available in July 2014 and can be used with all hardware platforms on which Oracle Database 12c is supported. This option accelerates database performance by orders of magnitude for analytics, data warehousing, and reporting, while also speeding up online transaction processing (OLTP). It allows any existing Oracle Database-compatible application to automatically and transparently take advantage of columnar in-memory processing, without additional programming or application changes. Benefits:
    - Fast ad-hoc analytics without the need to pre-create indexes
    - Completely transparent to existing applications
    - Faster mixed-workload OLTP
    - No database size limit
    - Industrial-strength availability and security
    - The robustness and maturity of Oracle Database 12c
    To find out more, see Oracle Database In-Memory and the comment from Rittman Mead on the Oracle In-Memory Option launch... and I will let you know how this unfolds regarding advantages for OBI11g, Exalytics, and Big Data over the coming months.
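    To illustrate how little application change is involved, here is a minimal sketch of enabling the column store for a single table, assuming Oracle Database 12c (12.1.0.2 or later) with the In-Memory option licensed and a non-zero INMEMORY_SIZE configured; the connection details and table name are illustrative.

        # Minimal sketch: populate one table into the in-memory column store.
        # Existing SQL against the table is unchanged; the optimizer picks the
        # columnar copy automatically. Connection string is a placeholder.
        import cx_Oracle

        conn = cx_Oracle.connect("scott", "tiger", "dbhost/orcl")
        cur = conn.cursor()
        cur.execute("ALTER TABLE sales INMEMORY")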

    Read the article

  • 2D Animation Smoothness - Delta time vs. Kinematics

    - by viperld002
    I'm animating a sprite in 2D with key frames of rotation and xy-positions. I recently had a discussion with someone who claimed that when the device (an iPad using cocos2d) hits a performance bump due to whatever else the user may be doing, lag will arise, and that the best way to fight it is to store not actual positions but velocities, accelerations, and torques, using kinematics. His suggestion is to evaluate the positions and rotations from these quantities at the current point in time. I have never heard of using kinematics to stem lag in 2D animation and am not sure how effective it could be; it also seems like overkill. The application is not networked, so everything runs on a local device. The desired effect is that the animation always plays as closely as possible to the target frame rate. Wouldn't this technique suffer the same problems as simply using the time since the last frame, or a fixed time step, since the kinematics would also require some time value to perform the calculation? What techniques could you suggest to best achieve the desired effect?
    EDIT 1: Thank you for your responses, they are very illuminating. I want to clarify my question before choosing an answer, however, to make sure this post really serves its purpose. I have a sprite of a ball and a text file with three arrays of information (rotations, x-translations, y-translations), each unit of which is a key frame to be stepped through (0 to 49, then back to 0 to replay). I play this back by interpolating from the current key frame to the next, every n units of time. The animation is visibly correct when compared to a video I was given of it, and it is smooth because of the interpolation between key frames. This is the existing state of the project. There are no physics simulated, only a static animation of a ball moving in a way an artist specifically designed. Should I, instead of storing rotations in degrees and translations as positions in space, derive velocities, accelerations, and torques to express this static animation as a function of time? That is, position_now = foo(time_now), where foo uses kinematics.
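    For what it's worth, the usual way to get the "function of time" behavior without kinematics is to sample the existing key-frame arrays against elapsed clock time, so a lag spike skips frames rather than slowing the playback. A minimal sketch follows, assuming a 30 fps key-frame rate and a simple wrap-around loop (both assumptions; the question's data may ping-pong from 49 back down to 0 instead).

        # Minimal sketch: evaluate the looping key-frame arrays at elapsed
        # time t, with linear interpolation between frames. A dropped frame
        # just means t has advanced further; playback speed is unaffected.
        KEYFRAME_RATE = 30.0  # assumed key frames per second

        def sample(frames, t_seconds):
            """Linearly interpolate a looping key-frame array at time t."""
            pos = (t_seconds * KEYFRAME_RATE) % len(frames)
            i = int(pos)
            j = (i + 1) % len(frames)  # wrap back to frame 0
            frac = pos - i
            # Note: angles may need shortest-arc handling across the wrap
            # (e.g. 359 -> 0 degrees); omitted here for brevity.
            return frames[i] * (1.0 - frac) + frames[j] * frac

        def ball_state(rotations, xs, ys, t_now, t_start):
            t = t_now - t_start
            return sample(rotations, t), sample(xs, t), sample(ys, t)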

    Read the article

  • Using HBase or Cassandra for a token server

    - by crippy
    I've been trying to figure out how to use HBase or Cassandra for a token system we're re-implementing. I could probably squeeze quite a lot more out of MySQL, but it feels like clinging to the wrong tool for the task just because we know it well; eventually we will hit a wall (as happened to us in other areas). Naturally, I started looking into possible NoSQL solutions, and the prominent ones (at least in terms of buzz) are HBase and Cassandra. The story is more or less like this:
    - A user can send a gift to other users.
    - Each gift has a list of recipients, or is public, in which case it is limited by a count or an expiration date.
    - For each gift sent, we generate a token that uniquely identifies that gift.
    - For each gift, we track the list of potential recipients and their current status for that gift (accepted, declined, etc.).
    - A user can request to see all his currently pending gifts.
    - A user can request a list of the users he has sent a gift to today (used to limit the number of gifts sent).
    - We require the ability to "dump" or ignore expired gifts (gifts x days old are considered expired).
    There are some other requirements, but I believe the above covers the essentials. How would I go about modeling that using HBase or Cassandra? As for why we're moving: the wall was performance. A few tens of millions of records per day over two tables, kept for two weeks (I wish I could have kept the data longer, but there was no way). Response times kept getting slower until eventually we had to start cutting down the number of days of data we kept. Caching helps, but it's not an ideal solution, since a big part of the operations are updates. Also, as I hinted in my original post, we use MySQL extensively; we know exactly what it can and can't do, from naive implementations through native partitioning to horizontally sharding our dataset at the application level across multiple DB nodes. It can be done, but that's not really what I'm trying to get at here. I asked a very specific question about designing a solution using a NoSQL store, since it's very hard to find design examples out there. Brainlag, I'm not trying to come off as rude; I actually appreciate that you were the only one who even bothered to respond. But I see it over and over again: people ask questions, and others assume they have no idea what they're talking about and give an irrelevant answer. Please ignore RDBMS suggestions; the question is about NoSQL.
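    On the Cassandra side, the standard approach is query-first modeling: one table per read path from the list above, denormalized on write. Here is a minimal sketch using the DataStax Python driver, assuming a recent Cassandra and an existing "gifts" keyspace; all table and column names are assumptions. The built-in TTL stands in for the "dump expired gifts" requirement, so no cleanup job is needed.

        # Minimal sketch: one table per query, with a composite partition key
        # for the per-sender daily limit and a table-level TTL for expiry.
        from cassandra.cluster import Cluster

        session = Cluster(["127.0.0.1"]).connect("gifts")

        # One row per gift, keyed by its token (lookup on redemption).
        session.execute("""
            CREATE TABLE IF NOT EXISTS gift_by_token (
                token text PRIMARY KEY,
                sender_id bigint,
                created_at timestamp,
                public boolean
            )""")

        # Pending gifts partitioned by recipient ("show my pending gifts"
        # is one single-partition read).
        session.execute("""
            CREATE TABLE IF NOT EXISTS pending_by_recipient (
                recipient_id bigint,
                token text,
                status text,  -- pending / accepted / declined
                PRIMARY KEY (recipient_id, token)
            )""")

        # Gifts sent per sender per day (the daily send-limit check);
        # default_time_to_live ages rows out automatically after two weeks.
        session.execute("""
            CREATE TABLE IF NOT EXISTS sent_by_sender_day (
                sender_id bigint,
                day date,
                token text,
                PRIMARY KEY ((sender_id, day), token)
            ) WITH default_time_to_live = 1209600""")

    The trade-off versus MySQL is that every new query pattern needs its own table and its own write, but each of the reads above touches a single partition, which is exactly the access pattern Cassandra scales well on. An HBase design would be analogous, with row-key prefixes playing the role of the partition keys.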

    Read the article

< Previous Page | 496 497 498 499 500 501 502 503 504 505 506 507  | Next Page >