Search Results

Search found 17124 results on 685 pages for 'learning tool'.

Page 557/685

  • OSB 11g & SAP – Single Channel/Program ID for Multiple IDOCs

    - by Shub Lahiri, A-Team
    Background: This note is a supplement to the blog entry, SOA 11g & SAP – Single Channel/Program ID for Multiple IDOCs by Greg Mally. Greg has shown how a single SOA Suite composite can be used with iWay Adapters to receive multiple IDOC types via a single channel in the adapter, corresponding to a single programID on the SAP system. We will try to address the same requirements within the OSB framework here.

    Project Build - Design Time: The basic build of an OSB project with the iWay SAP Adapter, as seen in another entry in this blog, consists of working in the OSB Design console and Application Explorer.

    OSB Design Time - Part 1: We will create a placeholder project first in OSB with a proper directory structure, so that we can export the WSDL, XSD and the JCA binding information from Application Explorer directly into this project.

    Application Explorer - iWay Design Time Tool: Receiving IDOCs is classified as an inbound event within Application Explorer. For setting up events, a channel is first defined (e.g. iDoc_Channel) using the same PROGRAMID (RFC destination) as defined within SAP for the OSB server. Next, the same channel is used to export the JCA Inbound Event artifacts for the candidate IDOC, e.g. DEBMAS06, directly to the pre-created OSB project. Note that schema validation has been turned off. As a result, the adapter can use a single channel at runtime to receive multiple IDOC types from SAP and pass them on to the OSB runtime engine without any validation. In other words, we do not have to repeat the above step for each IDOC type.

    OSB Design Time - Part 2: Create two simple XML-based Business Services to write to a file, e.g. SAP_DEBMAS_File and SAP_MATMAS_File. Next, generate a Proxy Service using the JCA binding file exported from Application Explorer in the previous section. In the generated proxy service, edit the message flow and add a route node. Add a routing table in the route node with the following routing function: fn:local-name-from-QName(fn:node-name($body/*[1])). This function takes advantage of the fact that the XML payload at runtime, after translation by the adapter, has the IDOC type as its top element. With the routing function in place, build the routing table with two branches to route the IDOCs to the appropriate Business Service for writing the XML payload to files in separate directories. This completes the build of the OSB project.

    Testing - Run-Time: After deployment and activation, the SAP adapter will wait to receive multiple types of IDOCs sent from the SAP system using a single channel. Upon receipt of the IDOCs, the OSB project will route them appropriately to save the corresponding XML payloads for different IDOC types in different directories.
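    To make the routing idea concrete outside of OSB, here is a small illustrative sketch in Python (not part of the actual OSB project): the XQuery expression above simply returns the local name of the payload's top element, so the equivalent check and branch looks like this. The MATMAS05 IDOC type and the output directory names are assumptions made for the example only.

      # Illustrative only: mirrors the effect of fn:local-name-from-QName(fn:node-name($body/*[1]))
      import xml.etree.ElementTree as ET

      def idoc_type(payload_xml: str) -> str:
          """Return the local name of the root element, e.g. 'DEBMAS06'."""
          root = ET.fromstring(payload_xml)
          return root.tag.split('}')[-1]          # strip any namespace prefix

      def route(payload_xml: str) -> str:
          """Pick a target (here just a directory) from the IDOC type."""
          targets = {"DEBMAS06": "out/debmas", "MATMAS05": "out/matmas"}   # assumed types/paths
          return targets.get(idoc_type(payload_xml), "out/unrouted")

      print(route("<DEBMAS06><IDOC/></DEBMAS06>"))    # -> out/debmas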

    Read the article

  • Book Review: Oracle ADF 11gR2 Development Beginner's Guide

    - by Grant Ronald
    Packt Publishing asked me to review Oracle ADF 11gR2 Development Beginner's Guide by Vinod Krishnan, so on a couple of long flights I managed to get through the book in a couple of sittings. One point to make clear before I go into the review: having authored "The Quick Start Guide to Fusion Development: JDeveloper and Oracle ADF", I've written a book which covers the same topic/beginner level. I also think it's worth stating up front that I applaud anyone who has gone through the effort of writing a technical book. So well done Vinod. But on to the review.

    The book itself is a good breakdown of topic areas. Vinod starts with a quick tour around the IDE, which is an important step given all the work you do will be through the IDE. The book then goes through the general path that I tend to always teach: a quick overview demo, ADF BC, validation, binding, UI, task flows and then the various "add on" topics like security, MDS and advanced topics. So it covers the right topics in, IMO, the right order. I also think the writing style flows nicely - it's a relatively easy book to read, it doesn't get too formal, and the "Have a go hero" hands-on sections will be useful for many.

    That said, I did pick out a number of styles/themes in the writing that I found went against the idea of a beginner's guide. For example, in writing my book, I tried to carefully avoid talking about topics not yet covered or not yet relevant at that point in someone's learning. So, if I was a new ADF developer reading this book, did I really need to know about ADFBindingFilter and the DataBindings.cpx file on page 58? I've only just learned how to do a simple drag-and-drop application, so showing me XML configuration files relevant to the JSF/ADF lifecycle is probably going to scare me off! I found this in a couple of places; for example, the security chapter starts on page 219, but by page 222 (and most of the preceding pages are hands-on steps) we're diving into web.xml, weblogic.xml, adf-config.xml, jsp-config.xml and jazn-data.xml. Don't get me wrong, I'm not saying you shouldn't know this, but I feel you have to get people onto a strong grounding in the concepts before showing them implementation files. Having just learned what ADF Security is, is "The initialization parameter remove.anonymous.role is set to false for the JpsFilter filter as this filter is the first filter defined in the file" really going to help me?

    The other theme I found which I felt didn't work was that a couple of the chapters descended into a reference guide. For example, page 159 onwards basically lists UI components and their properties, and page 87 onwards lists the attributes of ADF BC in pretty much the same way as the online help or developer guide. I have a personal aversion to any sort of help that says pretty much what the attribute name is, e.g. "Precision Rule: this option is used to set a strict precision rule", or "Property Set: this is the property set that has to be applied to the attribute". Hmmm, I think I could have worked that out myself; what I would want to know in a beginner's guide is what these are for and what I might use them for... and if I don't need them to create an emp/dept example, then maybe it's better to leave them out.

    All that said, would the book help me? Yes, it would. It's obvious that Vinod knows ADF, his style is relatively easy going, and the book covers all that it has to, but I think the book could have done a better job on the educational side of guiding beginners.

    Read the article

  • Oracle Tutor: XPDL conversion (and why you should care)

    - by mary.keane
    You may have noticed that the Oracle Business Process Converter feature in Tutor 14 supports "XPDL" conversion to Oracle Business Process Analysis Suite (BPA), Oracle Business Process Management Suite (BPM), and Oracle Tutor, and you may have briefly wondered "what is XPDL?" before you moved on to the Visio import feature (a very popular feature in Tutor 14). This posting is for those who do not yet understand (or care) about XPDL and process modeling.

    Many of us (and I'm including myself) have spent years working in the process definition arena: we've written procedures, designed systems and software to help others write procedures, and have been responsible for embedding policies and procedures into training material for employees. We've worked with tools such as Oracle Tutor, Microsoft Visio, Microsoft Word, and UPK. Most of us have never worked with "modeling tools" before, and we certainly never had to understand BPMN. It's a brave new world in this arena, and companies desperately need people with policy and procedural system expertise who can work with system analysts so there is a seamless transfer of knowledge from IT to employees. When working with applications, a picture is worth a thousand words, so eventually you're going to need to understand and be able to work with business process models.

    XPDL is an acronym for XML Process Definition Language, and it is an interchange format for business process models. It allows you to take a BPMN model that was developed in one workflow application such as BizAgi and import it into another workflow application or a true BPMN management system such as Oracle BPM. Specifically, the XPDL format contains the graphical information of a model as well as any executable information. By using a common format, models can be moved from a basic modeling application used by business owners to applications used by system architects. Over 80 applications support the XPDL format, including MetaStorm ProVision, BEA ALBPM, BizAgi, and Tibco. I mention these applications because we have provided XSLT mapping files specifically for these vendors. Oracle Business Process Converter was designed with user extensibility in mind, and thus users can add their own XML files so that additional XPDL models from other vendors can be converted to BPM, BPA, and Oracle Tutor. Instructions on how to add your own files can be found in Appendix 4 of the Oracle Business Process Converter manual.

    Let's take a visual look at how this works. The original post includes an example of a model developed in BizAgi. This model can be created by the average business user without a large learning curve, and it's a good start for the system analyst who will be adding web services, as well as for the business manager who manages the process described in the model. By exporting this model as XPDL, the information can be converted into Oracle BPA and Oracle BPM, as well as converted to Oracle Tutor to become the framework for a procedure. Through this conversion feature, one graphic illustration of a business process can be used by a system analyst, business analyst, business manager, and employee. The original post's figures show the model converted to a Tutor procedure (including the task section of the procedure after conversion from an XPDL file), the model converted to BPA, and the model converted to BPM.

    End users still want step-by-step instructions on how to perform their jobs, so procedures (Oracle Tutor) and application simulations (UPK) are still a critical piece of the solution. But IT professionals need graphic descriptions of how the applications work, regardless of whether there are any tasks involving humans. Now there is a way to convert procedures (Oracle Tutor docx files) and basic models (XPDL files) so that business managers and system analysts can share process information.

    References: Wikipedia, XPDL; Workflow Management Coalition, XPDL Support and Resources; Oracle Business Process Converter manual, Oracle Tutor 14; Oracle Business Process Management 11g.

    If you have any XPDL conversion stories to share, we'd love to hear from you. Best wishes for the coming new year, Mary Keane, Senior Development Manager, Oracle Tutor and BPM

    Read the article

  • SQL SERVER – Solution – User Not Able to See Any User Created Object in Tables – Security and Permissions Issue

    - by pinaldave
    There is an old quote, "A Picture is Worth a Thousand Words", and I believe it immensely. Quite often I get phone calls saying that something is not working and asking if I can help. In most cases my reaction is that I need to know more - send me the exact error or a screenshot. Until and unless I see the error or reproduce the scenario myself, I prefer not to comment. Yesterday I got a similar phone call from an old friend who was not sure what was going on. Here is what he said. "When I try to connect to SQL Server, it lets me connect just fine and lets me open and explore the database. I noticed that I do not see any user created objects, but when my colleague attempts to connect to the server, he is able to explore the database and see all the user created tables and other objects. Can you help me fix it?" My immediate reaction was that he was facing a security and permissions issue. However, before making that recommendation I suggested that he send me a screenshot of his own SSMS and his colleague's SSMS. After carefully looking at both screenshots, I was very confident about the issue and we were able to resolve it. Let us reproduce the same scenario - maybe there is some learning in it for us.

    Issue: user not able to see user created objects. First let us look at my friend's SSMS screen, and then his colleague's SSMS screen (both recreated on my machine; the screenshots are in the original post). You can see that my friend could not see the user tables but his colleague certainly could. Now I believed it was a permissions issue. Further to this, I asked him to send me another image showing the various permissions of the user in the database - one for his own screen and one for his colleague's screen. This indeed proved that my friend did not have access to the AdventureWorks database, and because of that he was not able to access it. He did have public access, which means he had rights similar to guest access. However, their SQL Server had followed my earlier advice on limiting guest access, which means he was not able to see any user created objects.

    My next question was to validate what kind of access my friend's colleague had. He replied that the colleague is the admin of the server. I suggested that if my friend was supposed to have admin access to the database, he should request it from his colleague. My friend promptly asked, and on the following screen his colleague added him as an admin. You can do the same using the following T-SQL script as well.

      USE [AdventureWorks2012]
      GO
      ALTER ROLE [db_owner] ADD MEMBER [testguest]
      GO

    Once my friend was an admin, he was able to access all the user objects just as he expected. Please note that this complete exercise was done on a development server. One should not play around with security on a live or production server. Security is an issue that should be left to only the senior administrators of the server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Security, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Working Environment Starting November

    - by Jenson
    This is actually a post-dated update. After working in the private sector for so many years (the two years I spent as an IT trainer in a secondary school don't count, as I was working under contract with a private IT training agency), I've decided to try my luck in the public sector. Fortunately, I passed the interview and was offered the position of Web Administrator at a government statutory board, the Agency for Science, Technology and Research (of Singapore).

    My previous employment, with a Japanese MNC (multinational company), was a totally new environment for me, as I had never worked for a Japanese company before. That first experience also gave me my first nightmare with them, and I vowed not to work for them, or any other Japanese company, again. No doubt I had the freedom to choose the tools and methods I wished to use for my projects, but the project management was simply too messy and disorganized. A lot of the time I didn't find that everyone was working as a team; it was more like everyone pursuing their own goals. Accountability for a project was not shared - it was all lumped onto the shoulders of the developer in charge (they called the role Software Engineer). I was working on a Windows-based .NET project which, as I had already pointed out, was not manageable by just one software engineer, but it seemed like nobody cared - not even the person who proposed the solution to the customer. What he cared about was whether you delivered the project on time so that he could please his customer and show senior management his good work. There are too many stories to tell, and I simply don't want to dwell on this, as it is already in the past for me.

    With my new role at the government agency, I hope to contribute my best while learning as much as I can. I will share whatever I can on technologies, methodologies, and so on, wherever I'm allowed and permitted to (and of course, for non-work-related stuff, I'd be glad to share without much hesitation). Thank you!

    Read the article

  • Synchronizing ODSEE and OUD

    - by Etienne Remillon
    When it comes to synchronizing between ODSEE and OUD, what are the best options? A couple of options are available:

    - use one of OUD's internal capabilities, called the Replication Gateway;
    - use our synchronization tool, Directory Integration Platform (DIP), part of Oracle Directory Services Plus;
    - manual export and import.

    Let's check the pros and cons of each method.

    The Replication Gateway is the natural, out-of-the-box solution for the task. We created it as a feature of OUD because it works at our replication protocol level. The gateway performs the required adaptation between ODSEE's replication protocol and OUD's. The benefit of doing this is that it provides strong consistency between the two types of directories. It fully leverages the conflict management implemented in the replication protocols to ensure that changes are applied in a coherent and ordered manner. It does not require specific modifications to existing ODSEE production instances, such as turning on the "retro changelog". Changes are propagated at near replication speed in both directions. The Replication Gateway can also synchronize information that is stored internally in the directory server, such as "xxxxx" account locking managed at the ODSEE server level and not via the nsyyyy attribute. The Replication Gateway does not require any specific tools or installation-specific procedures; it is managed like other OUD components, with monitoring and configuration via the standard console. The Replication Gateway does not, however, perform data adaptation (remapping or transformation) between ODSEE and OUD.

    Using Directory Integration Platform as a component external to OUD brings flexibility in remapping and transformations between ODSEE and OUD. There is a price to pay for using DIP to perform the synchronization task. You will have to turn on the retro changelog to get access to changes on the ODSEE side (this will impact disk and CPU usage and performance, which could be a serious challenge for your existing ODSEE environment if you have not provisioned additional hardware and instances). You will also not benefit from conflict resolution management, and this might have to be addressed at the application level, which is not always possible to implement.

    Using export and import seems very simple, but this methodology cannot ensure a highly available deployment with up-to-date entries on both sides. It can be used if full HA with up-to-date data is not needed (during synchronization time). It is often used when data cleaning needs to take place, to avoid polluting a new environment with old, unnecessary data.

    Read the article

  • New Oracle VM RAC template with support for Oracle VM 3 built-in

    - by wcoekaer
    The RAC team did it again (thanks Saar!) - another awesome set of Oracle VM templates published and uploaded to My Oracle Support. You can find the main page here. What's special about the latest version of DeployCluster is that it integrates tightly with Oracle VM 3 Manager. It is basically an Oracle VM frontend that helps start VMs and pass arguments down automatically; there is absolutely no need to log into the Oracle VM servers or the guests. Once it completes, you have an entire Oracle RAC database setup ready to go. Here's a short summary of the steps:

    - Set up an Oracle VM 3 server pool.
    - Download the Oracle VM RAC template from oracle.com.
    - Import the template into Oracle VM using Oracle VM Manager (repository -> import).
    - Create a public and a private network in Oracle VM Manager in the network tab.
    - Configure the template with the right public and private virtual networks.
    - Create a set of shared disks (physical or virtual) to assign to the VMs you want to create (for ASM; at least 5).
    - Clone a set of VMs from the template (as many RAC nodes as you plan to configure). With Oracle VM 3.1 you can clone with a number, so one clone command for, say, 8 VMs is easy.
    - Assign the shared devices/disks to the cloned VMs.
    - Create a netconfig.ini file on your manager node, or on a client where you plan to run DeployCluster. This little text file just contains the IP addresses, hostnames, etc. for your cluster - a very simple, small text file.
    - Run deploycluster.py with the VM names as arguments.

    Done. At this point, the tool will connect to Oracle VM Manager, start the VMs and configure each one: configure the OS (Oracle Linux), configure the disks with ASM, configure the clusterware (CRS), configure ASM, and create database instances on each node. Now you are ready to log in and use your x-node database cluster. No need to download various products from various websites, click on trial licenses for the OS, or go to a virtual machine store with sample and test versions only - this is production ready and supported. Software. Complete.

    Example netconfig.ini:

      # Node specific information
      NODE1=racnode1
      NODE1VIP=racnode1-vip
      NODE1PRIV=racnode1-priv
      NODE1IP=192.168.1.2
      NODE1VIPIP=192.168.1.22
      NODE1PRIVIP=10.0.0.22
      NODE2=racnode2
      NODE2VIP=racnode2-vip
      NODE2PRIV=racnode2-priv
      NODE2IP=192.168.1.3
      NODE2VIPIP=192.168.1.23
      NODE2PRIVIP=10.0.0.23
      # Common data
      PUBADAP=eth0
      PUBMASK=255.255.255.0
      PUBGW=192.168.1.1
      PRIVADAP=eth1
      PRIVMASK=255.255.255.0
      RACCLUSTERNAME=raccluster
      DOMAINNAME=mydomain.com
      DNSIP=
      # Device used to transfer network information to second node in interview mode
      NETCONFIG_DEV=/dev/xvdc
      # 11gR2 specific data
      SCANNAME=racnode12-scan
      SCANIP=192.168.1.50
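    As a side note, a mistyped netconfig.ini is an easy way to stall a run, so a quick sanity check of the file before invoking deploycluster.py can save a retry. The helper below is purely my own illustrative Python sketch (it is not part of the Oracle-supplied DeployCluster kit); the key names follow the sample above.

      # Hypothetical helper: verify the expected keys exist in netconfig.ini before deploying.
      import sys

      REQUIRED_COMMON = ["PUBADAP", "PUBMASK", "PUBGW", "PRIVADAP", "PRIVMASK",
                         "RACCLUSTERNAME", "DOMAINNAME", "SCANNAME", "SCANIP"]
      PER_NODE_SUFFIXES = ["", "VIP", "PRIV", "IP", "VIPIP", "PRIVIP"]   # NODE1, NODE1VIP, ...

      def check_netconfig(path: str, nodes: int = 2) -> list:
          keys = {}
          with open(path) as f:
              for line in f:
                  line = line.strip()
                  if line and not line.startswith("#") and "=" in line:
                      k, v = line.split("=", 1)
                      keys[k.strip()] = v.strip()
          missing = [k for k in REQUIRED_COMMON if not keys.get(k)]
          for n in range(1, nodes + 1):
              missing += [f"NODE{n}{s}" for s in PER_NODE_SUFFIXES if not keys.get(f"NODE{n}{s}")]
          return missing

      if __name__ == "__main__":
          problems = check_netconfig(sys.argv[1] if len(sys.argv) > 1 else "netconfig.ini")
          print("Missing keys:", problems if problems else "none")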

    Read the article

  • What are the arguments against parsing the Cthulhu way?

    - by smarmy53
    I have been assigned the task of implementing a Domain Specific Language for a tool that may become quite important for the company. The language is simple but not trivial, it already allows nested loops, string concatenation, etc. and it is practically sure that other constructs will be added as the project advances. I know by experience that writing a lexer/parser by hand -unless the grammar is trivial- is a time consuming and error prone process. So I was left with two options: a parser generator à la yacc or a combinator library like Parsec. The former was good as well but I picked the latter for various reasons, and implemented the solution in a functional language. The result is pretty spectacular to my eyes, the code is very concise, elegant and readable/fluent. I concede it may look a bit weird if you never programmed in anything other than java/c#, but then this would be true of anything not written in java/c#. At some point however, I've been literally attacked by a co-worker. After a quick glance at my screen he declared that the code is uncomprehensible and that I should not reinvent parsing but just use a stack and String.Split like everybody does. He made a lot of noise, and I could not convince him, partially because I've been taken by surprise and had no clear explanation, partially because his opinion was immutable (no pun intended). I even offered to explain him the language, but to no avail. I'm positive the discussion is going to re-surface in front of management, so I'm preparing some solid arguments. These are the first few reasons that come to my mind to avoid a String.Split-based solution: you need lot of ifs to handle special cases and things quickly spiral out of control lots of hardcoded array indexes makes maintenance painful extremely difficult to handle things like a function call as a method argument (ex. add( (add a, b), c) very difficult to provide meaningful error messages in case of syntax errors (very likely to happen) I'm all for simplicity, clarity and avoiding unnecessary smart-cryptic stuff, but I also believe it's a mistake to dumb down every part of the codebase so that even a burger flipper can understand it. It's the same argument I hear for not using interfaces, not adopting separation of concerns, copying-pasting code around, etc. A minimum of technical competence and willingness to learn is required to work on a software project after all. (I won't use this argument as it will probably sound offensive, and starting a war is not going to help anybody) What are your favorite arguments against parsing the Cthulhu way?* *of course if you can convince me he's right I'll be perfectly happy as well
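    For what it's worth, the nested-call argument (the third reason above) is easy to demonstrate. The sketch below is a minimal illustration in Python rather than the DSL's real implementation: splitting on commas destroys nesting, while even a toy recursive-descent parser recovers the structure.

      # Minimal sketch: naive splitting vs. a tiny recursive-descent parser for nested calls.
      import re

      def split_args_naively(call: str):
          # "add(add(a, b), c)" -> ['add(a', ' b)', ' c']  -- nesting is destroyed
          return call[call.index("(") + 1:-1].split(",")

      TOKEN = re.compile(r"\s*([A-Za-z_]\w*|[(),])")

      def parse(expr: str):
          tokens = TOKEN.findall(expr)
          pos = 0
          def expression():
              nonlocal pos
              name = tokens[pos]; pos += 1
              if pos < len(tokens) and tokens[pos] == "(":          # function call
                  pos += 1
                  args = [expression()]
                  while tokens[pos] == ",":
                      pos += 1
                      args.append(expression())
                  pos += 1                                          # consume ')'
                  return (name, args)
              return name                                           # plain identifier
          return expression()

      print(split_args_naively("add(add(a, b), c)"))   # mangled pieces
      print(parse("add(add(a, b), c)"))                # ('add', [('add', ['a', 'b']), 'c'])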

    Read the article

  • `make install` fails apparently due to typo, but not in makefile: How to find and fix?

    - by Archelon
    I'm trying to install the fujitsu-usb-touchscreen drivers from here, on Kubuntu 12.04 on my new Fujitsu LifeBook P1630. (See fujitsu-usb-touchscreen on kubuntu 13.04 (64-bit) on P1630: `make` errors.) I downloaded the .zip file, unzipped it, and ran make in the directory thus created; this all worked as expected. However, when I run sudo checkinstall (which invokes make install), things go less well. On the first attempt the installation aborted with the following error:

      make: execvp: /etc/init.d/fujitsu_touchscreen: Permission denied
      make: *** [install] Error 127

    I eventually resolved this by running:

      $ sudo chmod +x /etc/init.d/fujitsu_touchscreen

    But although a second sudo checkinstall then does not give the execvp error, it still fails at a later stage, and the log (on stdout) shows this dpkg error:

      dpkg: error processing /home/archelon/fujitsu-touchscreen-driver/cybergene-fujitsu-usb-touchscreen-112fdb75b406/cybergene-fujitsu-usb-touchscreen-112fdb75b406_amd64.deb (--install):
      unable to create `/sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy.dpkg-new' (while processing `/sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy'): No such file or directory

    And, indeed, there is no /sys/module/fujitsu/usb/touchscreen/parameters/touch_maxy; there is, however, /sys/module/fujitsu_usb_touchscreen/parameters/touch_maxy, and this is presumably what was intended. But this incorrect filename does not appear in the makefile or any other file in the directory, at least not that I can find. Nor does it appear, as I discovered after running sudo checkinstall --install=no as suggested below, in the .deb package created by checkinstall. Where might such a typographical error be originating, and how would I go about fixing it?

    Edited to add: I'm viewing the contents of the .deb file with ark, Kubuntu's default tool. It contains only three files: control.tar.gz, data.tar.gz, and debian-binary. data.tar.gz contains the directory tree that appears to match up to the usual root filesystem, with /etc, /lib, /sys, and /usr directories. (Looking at other .deb files on my system, this structure appears to be typical.) A screenshot of that tree, and another showing that control.tar.gz contains three files (one of which is empty), are in the original post. Here's the actual .deb file: https://www.dropbox.com/s/odwxxez0fhyvg7a/cybergene-fujitsu-usb-touchscreen_112fdb75b406-1_amd64.deb

    Edited 2013-09-28 to add: After reinstalling Kubuntu 12.04 again, this time recreating the /home partition (which, again, had been generated during an install of 13.04), I can no longer reproduce this error. I am still curious to know how the underscores got changed to slashes, but it looks as though nobody has any idea. It is perhaps also of interest to note that while I have still not successfully run checkinstall against this package, I have done make install; it requires making /etc/init.d/fujitsu_touchscreen executable and installing hal. The GUI freezes shortly after installation completes, there is no particular new functionality afterwards that I have noticed, and the system can no longer resume from being suspended; however, this will be pursued elsewhere.
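    One way to hunt for the origin of the mangled path (purely my own illustrative sketch, not something from the original post) is to search every file in the unpacked source tree, or in the tree produced by dpkg-deb -x package.deb extracted/, for both spellings of the path. The directory names below are hypothetical.

      # Search a directory tree for both the mangled and the expected sysfs path.
      import os

      NEEDLES = [b"fujitsu/usb/touchscreen/parameters/touch_maxy",   # mangled (slashes)
                 b"fujitsu_usb_touchscreen/parameters/touch_maxy"]   # expected (underscores)

      def find_needles(root: str):
          for dirpath, _, filenames in os.walk(root):
              for name in filenames:
                  path = os.path.join(dirpath, name)
                  try:
                      with open(path, "rb") as f:
                          data = f.read()
                  except OSError:
                      continue
                  for needle in NEEDLES:
                      if needle in data:
                          print(f"{path}: contains {needle.decode()}")

      find_needles("extracted")   # or point it at the driver source directory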

    Read the article

  • Cool Tools You Can Use: Validation Templates for PeopleSoft Contracts Processes

    - by Mark Rosenberg
    This is the first in a series of postings we’ll be making under the heading of Cool Tools You Can Use. Our PeopleSoft product management team identified the need for this series after reflecting on the many conversations we have each year with our PeopleSoft community members. During these conversations, we were discovering that customers and implementation partners were often not aware that solutions exist to the problems they were trying to address and that the solutions were readily available at no additional charge. Thus, the Cool Tools You Can Use series will describe the business challenge we’ve heard, the PeopleSoft solution to the challenge, and how you can learn more about the solution so that everyone can be sure to make full use of what PeopleSoft applications have to offer. The first cool tool we’ll look at is the Validation Template for PeopleSoft Contracts Process Requests, which was first released in December 2013 as part of PeopleSoft Contracts 9.2 Update Image 4. The business issue our customers highlighted to us is the need to tightly control but easily configure and manage the scope of data that any user can process when initiating a process. Control of each user’s span of impact is essential to reducing billing reconciliation issues, passing span of authority audits, and reducing (or even eliminating) the frequency of unexpected process results.  Setting Up the Validation Template for a PeopleSoft Contracts Process With the validation template, organizations can easily and quickly ensure the software restricts the scope of transactions a user can affect and gives organizations the confidence to know that business processes are being governed effectively. Additionally, this control of PeopleSoft Contracts process requests can be applied and easily maintained and adjusted from a web browser thereby enabling analysts to administer the rules without having to engage software developers to customize the software. During the field validation template setup, an analyst specifies the combinations of fields that must contain values when a user tries to setup a run control and initiate a PeopleSoft Contracts process from a process request page. For example, for the Process Limits component, an organization could require that users enter a valid combination of values for the business unit, contract, and contract type fields or a value in the contract administrator field. Until the user enters a valid combination of entries on the process request page, he cannot launch the process. With the validation template activated for process request pages, organizations can be confident that PeopleSoft Contracts users will not accidentally begin generating invoices or triggering other revenue management processes for transactions beyond their scope of authority. To learn more about the Validation Template, please review the Defining Validation Templates section of the PeopleSoft Contracts PeopleBooks. 

    Read the article

  • How to use DI and DI containers

    - by Pinetree
    I am building a small PHP mvc framework (yes, yet another one), mostly for learning purposes, and I am trying to do it the right way, so I'd like to use a DI container, but I am not asking which one to use but rather how to use one. Without going into too much detail, the mvc is divided into modules which have controllers which render views for actions. This is how a request is processed: a Main object instantiates a Request object, and a Router, and injects the Request into the Router to figure out which module was called. then it instantiates the Module object and sends the Request to that the Module creates a ModuleRouter and sends the Request to figure out the controller and action it then creates the Controller and the ViewRenderer, and injects the ViewRenderer into the Controller (so that the controller can send data to the view) the ViewRenderer needs to know which module, controller and action were called to figure out the path to the view scripts, so the Module has to figure out this and inject it to the ViewRenderer the Module then calls the action method on the controller and calls the render method on the ViewRenderer For now, I do not have any DI container set up, but what I do have are a bunch of initX() methods that create the required component if it is not already there. For instance, the Module has the initViewRenderer() method. These init methods get called right before that component is needed, not before, and if the component was already set it will not initialize it. This allows for the components to be switched, but it does not require manually setting them if they are not there. Now, I'd like to do this by implementing a DI container, but still keep the manual configuration to a bare minimum, so if the directory structure and naming convention is followed, everything should work, without even touching the config. If I use the DI container, do I then inject it into everything (the container would inject itself when creating a component), so that other components can use it? When do I register components with the DI? Can a component register other components with the DI during run-time? Do I create a 'common' config and use that? How do I then figure out on the fly which components I need and how they need to be set up? If Main uses Router which uses Request, Main then needs to use the container to get Module (or does the module need to be found and set beforehand? How?) Module uses Router but needs to figure out the settings for the ViewRenderer and the Controller on the fly, not in advance, so my DI container can't be setting those on the Module before the module figures out the controller and action... What if the controller needs some other service? Do I inject the container into every controller? If I start doing that, I might just inject it into everything... Basically I am looking for the best practices when dealing with stuff like this. I know what DI is and what DI containers do, but I am looking for guidance to using them in real life, and not some isolated examples on the net. Sorry for the lengthy post and many thanks in advance.
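    Since the question is about wiring rather than any particular library, here is a minimal sketch of one common answer (written in Python even though the framework in question is PHP): components receive their dependencies through their constructors, only the composition root talks to the container, and anything that depends on runtime data, like the ViewRenderer, is registered as a factory. The class and service names mirror the question and are otherwise hypothetical; a real container would also memoize shared instances.

      class Request: ...
      class Router:
          def __init__(self, request): self.request = request
      class Module:
          def __init__(self, request, router): self.request, self.router = request, router
      class ViewRenderer:
          def __init__(self, module_name, controller, action):
              self.view_path = f"{module_name}/views/{controller}/{action}.php"

      class Container:
          def __init__(self): self._factories = {}
          def register(self, name, factory): self._factories[name] = factory
          def get(self, name): return self._factories[name](self)   # factories receive the container

      c = Container()
      c.register("request", lambda c: Request())
      c.register("router", lambda c: Router(c.get("request")))
      c.register("module", lambda c: Module(c.get("request"), c.get("router")))
      # Runtime-dependent components are registered as factories instead of finished objects:
      c.register("view_renderer_factory",
                 lambda c: (lambda module_name, controller, action:
                                ViewRenderer(module_name, controller, action)))

      module = c.get("module")                           # composition root: the only place using the container
      make_renderer = c.get("view_renderer_factory")
      renderer = make_renderer("blog", "post", "show")   # called by the Module once routing is decided
      print(renderer.view_path)                          # blog/views/post/show.php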

    Read the article

  • Clever memory usage through the years

    - by Ben Emmett
    A friend and I were recently talking about the really clever tricks people have used to get the most out of memory. I thought I'd share my favorites, and would love to hear yours too!

    Interleaving on drum memory. Back in ye olde days before I'd been born (we're talking the 50s / 60s here), working memory commonly took the form of rotating magnetic drums. These would spin at a constant speed, and a fixed head would read from memory when the correct part of the drum passed it by, a bit like a primitive platter disk. Because each revolution took a few milliseconds, programmers took to manually arranging information non-sequentially on the drum, timing when an instruction or memory address would need to be accessed, then spacing information accordingly around the edge of the drum, thus reducing the access delay. Similar techniques were still used on hard disks and floppy disks into the 90s, but have become irrelevant with modern disk technologies.

    The Hashlife algorithm. Conway's Game of Life has attracted numerous implementations over the years, but Bill Gosper's Hashlife algorithm is particularly impressive. Taking advantage of the repetitive nature of many cellular automata, it uses a quadtree structure to store the hashes of pieces of the overall grid. Over time there are fewer and fewer new structures which need to be evaluated, so it starts to run faster with larger grids, drastically outperforming other algorithms both in terms of speed and the size of grid which can be simulated. The actual amount of memory used is huge, but it's used in a clever way, so it makes the list.

    Elite's procedural generation. Ok, so this isn't exactly a memory optimization - more a storage optimization - but it gets an honorable mention anyway. When writing Elite, David Braben and Ian Bell wanted to build a rich world which gamers could explore, but their 22K of memory was something of a limitation (for comparison, that's about the size of my avatar picture at the top of this page). They procedurally generated all the characteristics of the 2048 planets in their virtual universe, including the names, which were stitched together using a lookup table of parts of names. In fact the original plans were for 2^52 planets, but it was decided that that was probably too many. Oh, and they did all that in assembly language. Other games of the time used similar techniques too - The Sentinel's landscape generation algorithm being another example.

    Modern garbage collectors. Garbage collection in managed languages like Java and .NET ensures that most of the time, developers stop needing to care about how they use and clean up memory, as the garbage collector handles it automatically. Achieving this without killing performance is a near-miraculous feat of software engineering. Much like when learning chemistry, you find that every time you think you understand how the garbage collector works, it turns out to be a mere simplification; there are yet more complexities and heuristics to help it run efficiently. Of course introducing memory problems is still possible (and there are tools like our memory profiler to help if that happens to you), but they're much, much rarer.

    A cautionary note. In the examples above, there were good and well understood reasons for the optimizations, but cunningly optimized code has usually had to trade away readability and maintainability to achieve its gains. Trying to optimize memory usage without being pretty confident that there's actually a problem is doing it wrong. So what have I missed?
Tell me about the ingenious (or stupid) tricks you’ve seen people use. Ben

    Read the article

  • Gartner PCC: A Shovel & Some Ah-Ha's

    - by kellsey.ruppel
    When Gartner Vice President and leading analyst Whit Andrews kicked off the Gartner Portals, Content & Collaboration Summit on Monday, March 12 at the Gaylord Palms in Orlando, FL by bringing a shovel to the stage, eyebrows raised and a few thoughts went through my head. Either this guy plans to go help the construction workers outside construct that new pool at the Gaylord or he took a wrong turn and is at the wrong conference. Oh and how did he get that shovel through airport security? As Whit explained more his objective became more clear…take everything anyone has ever told you about portals and throw it out the window, as portals have evolved and times they are most certainly changing. The future Web is here, available not only on browsers but also via a broad spectrum of access points, including automobiles, consumer electronics and more and more mobile devices. Not merely prevalent, the future Web is also multimedia-driven and operates in real time, driven by mobility, social media, streaming video and other dynamic services. Applications and user experiences are in the midst of an evolution — from the early, simple mobile Web models to today’s Web 2.0 mobile apps and, ultimately, to a world of predominantly Web apps. Additionally, cloud services will forever change how portals and user experience are designed, built, delivered, sourced and managed. So what does this mean for you? Today’s organizations need software that will enable them to not just do their jobs, but to do it in a way that is familiar and easy for them.  What does this mean for IT? Use software and technology as an enabler, not as a roadblock. Overall, we had a great week in Orlando learning about how to improve the user experience, manage content explosion, launch social initiatives, transition to mobile environments and understand cloud and SaaS options.  We had some great conversations throughout the conference and at the Oracle booth. Lots of demonstrations were given of Oracle WebCenter Sites and Oracle Social Network. And as Christie mentioned earlier this week, our Vice President of Product Management and Strategy for WebCenter Loren Weinberg presented on the topic of customer engagement and talked about how organization’s relationships with their customers have fundamentally changed today and the resulting impact that has on their priorities.  Loren also talked about the importance of customer engagement, why that matters now more than ever, and what you can do to help your company or organization succeed in this new world. The question asked in every keynote and session was a simple one: What is your “ah-ha” moment? I personally had quite a few, some of which I’ve captured below. 70% of internal social initiatives eventually fail. By 2014, refusing to communicate with consumers via social media will be as harmful as ignoring emails/phone calls is today. Customer engagement = multi-channel + social & interactive + personal & relevant + optimized. If people choose to talk about your product/company/service, it's because it's remarkable. -- Seth Godin's keynote (one of the highlights of the conference!) The Web will become the primary method used for delivering content and applications to mobile devices. By 2015, 20% of smart phone users worldwide will conduct commerce using context-enriched services on a weekly basis.  86% of customers will pay more for a better customer experience. 6 P's of Quality User Experience. Product. Enabled by: People, Patterns, Process, Profit, Priorities. 
Did you attend the Gartner Summit? What were your ah-ha moments?

    Read the article

  • questions about dual-boot install Ubuntu 10.04 and Windows 7 on same hard drive

    - by Tim
    I'd like to dual-boot install Ubuntu 10.04 on the same hard drive as Windows 7, which has already been installed.

    As to sources on the internet: I found a website, iinet, about dual-boot installation of Ubuntu 10.10 and Windows 7 on the same hard drive, which I think is more specific than the one on Ubuntu Community, which does not target specific versions of the OSes. Since I am installing Ubuntu 10.04 instead of 10.10, my question is whether their installers are the same or almost the same, and whether I can follow iinet for my dual-boot installation. Or are there better websites for information about dual-boot installation of Ubuntu 10.04 and Windows 7?

    As to shrinking Windows partitions to make free space for Ubuntu partitions: iinet uses the partition software in Ubuntu's installer to shrink the Windows partition. But I saw on many websites that the partition software in Ubuntu's installer cannot guarantee shrinking Windows 7 partitions successfully, so they generally recommend shrinking Windows partitions under Windows itself, using its own tools. For example, Ubuntu Community says: "Some people think that the Windows partition must be resized only from within Windows Vista and Windows 7 using the shrink/resize option. ... If you use GParted Partition Editor in the Ubuntu Live CD be careful." So I was wondering which way to go in my situation.

    As to a partition for bootloader files: in iinet, I don't see a partition created and dedicated to boot files (i.e. GRUB files). However, I saw many websites strongly suggesting a boot partition for GRUB files, especially for the purpose of separating and protecting them from installed OS files. I was wondering which way I should choose and why.

    As to installing the GRUB bootloader: in iinet, I see that to install GRUB one only needs to specify the hard drive device for bootloader installation. However, in ubuntuguide (for more than 2 OSes and Ubuntu 9.04), some commands need to be run in order to put GRUB configuration files in the MBR and the OS partition, for the chain-load process (where to find the files for the next stage). In Ubuntu Community, there are some related sentences which I don't quite understand how to put into practice: "the only thing in your computer outside of Ubuntu that needs to be changed is a small code in the MBR (Master Boot Record) of the first hard disk. The MBR code is changed to point to the boot loader in Ubuntu. If you have a problem with changing the MBR code, you might prefer to just install the code for pointing to GRUB to the first sector of your Ubuntu partition instead. If you do that during the Ubuntu installation process, then Ubuntu won't boot until you configure some other boot manager to point to Ubuntu's boot sector. Windows Vista no longer utilizes boot.ini, ntdetect.com, and ntldr when booting. Instead, Vista stores all data for its new boot manager in a boot folder. Windows Vista ships with a command line utility called bcdedit.exe, which requires administrator credentials to use. You may want to read http://go.microsoft.com/fwlink/?LinkId=112156 about it. Using a command line utility always has its learning curve, so a more productive and better job can be done with a free utility called EasyBCD, developed and mastered during the times of the Vista Beta already. EasyBCD is user friendly and many Vista users highly recommend EasyBCD."

    In what is quoted above, I was wondering how exactly I should change the MBR code to point to the bootloader in Ubuntu? And if I fail to change the MBR code, are the suggested alternatives the boot managers bcdedit.exe and EasyBCD in Windows? With the three sources above, which one shall I follow? Thanks and regards

    Read the article

  • Data Auditor by Example

    - by Jinjin.Wang
    OWB has a node Data Auditors under Oracle Module in Projects Navigator. What is data auditor and how to use it? I will give an introduction to data auditor and show its usage by examples. Data auditor is an important tool in ensuring that data quality levels meet business requirements. Data auditor validates data against a set of data rules to determine which records comply and which do not. It gathers statistical metrics on how well the data in a system complies with a rule by auditing and marking how many errors are occurring against the audited table. Data auditors are typically scheduled for regular execution as part of a process flow, to monitor the quality of the data in an operational environment such as a data warehouse or ERP system, either immediately after updates like data loads, or at regular intervals. How to use data auditor to monitor data quality? Only objects with data rules can be monitored, so the first step is to define data rules according to business requirements and apply them to the objects you want to monitor. The objects can be tables, views, materialized views, and external tables. Secondly create a data auditor containing the objects. You can configure the data auditor and set physical deployment parameters for it as optional, which will be used while running the data auditor. Then deploy and run the data auditor either manually or as part of the process flow. After execution, the data auditor sets several output values, and records that are identified as not complying with the defined data rules contained in the data auditor are written to error tables. Here is an example. We have two tables DEPARTMENTS and EMPLOYEES (see pic-1 and pic-2. Click here for DDL and data) imported into OWB. We want to gather statistical metrics on how well data in these two tables satisfies the following requirements: a. Values of the EMPLOYEES.EMPLOYEE_ID attribute are three-digit numbers. b. Valid values for EMPLOYEES.JOB_ID are IT_PROG, SA_REP, SH_CLERK, PU_CLERK, and ST_CLERK. c. EMPLOYEES.EMPLOYEE_ID is related to DEPARTMENTS.MANAGER_ID. Pic-1 EMPLOYEES Pic-2 DEPARTMENTS 1. To determine legal data within EMPLOYEES or legal relationships between data in different columns of the two tables, firstly we define data rules based on the three requirements and apply them to tables. a. The first requirement is about patterns that an attribute is allowed to conform to. We create a Domain Pattern List data rule EMPLOYEE_PATTERN_RULE here. The pattern is defined in the Oracle Database regular expression syntax as ^([0-9]{3})$ Apply data rule EMPLOYEE_PATTERN_RULE to table EMPLOYEES.
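    To make the three rules concrete, here is a small illustration (plain Python with made-up rows, not OWB itself) of exactly what the pattern rule, the domain rule, and the relationship rule flag; a data auditor performs equivalent checks inside the database and writes the offending records to error tables.

      import re

      employees = [
          {"EMPLOYEE_ID": "101", "JOB_ID": "IT_PROG"},   # hypothetical rows, not the real seed data
          {"EMPLOYEE_ID": "12",  "JOB_ID": "SA_REP"},    # violates the three-digit pattern rule (a)
          {"EMPLOYEE_ID": "205", "JOB_ID": "AC_MGR"},    # violates the JOB_ID domain rule (b)
      ]
      departments = [{"DEPARTMENT_ID": "60", "MANAGER_ID": "999"}]   # violates the relationship rule (c)

      PATTERN = re.compile(r"^([0-9]{3})$")                                    # rule (a)
      VALID_JOBS = {"IT_PROG", "SA_REP", "SH_CLERK", "PU_CLERK", "ST_CLERK"}   # rule (b)
      employee_ids = {e["EMPLOYEE_ID"] for e in employees}                     # rule (c)

      errors = []
      for row in employees:
          if not PATTERN.match(row["EMPLOYEE_ID"]):
              errors.append((row, "EMPLOYEE_PATTERN_RULE"))
          if row["JOB_ID"] not in VALID_JOBS:
              errors.append((row, "JOB_ID domain rule"))
      for dept in departments:
          if dept["MANAGER_ID"] not in employee_ids:
              errors.append((dept, "EMPLOYEE_ID/MANAGER_ID relationship rule"))

      # A data auditor records non-compliant rows in error tables; here we just print them.
      for record, rule in errors:
          print(rule, "violated by", record)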

    Read the article

  • Selecting the correct installer to install Oracle Weblogic Server

    - by PratikS -- Oracle
    Whenever we start learning about a software product, the first step is to get the software installer and install it. Before we start with "How to install Oracle WebLogic Server?", let's understand the different kinds of installers available for Oracle WebLogic Server and select the correct one. There are three different kinds of WebLogic Server installers:

    1) Package Installer
    2) Development-only and supplemental installers
    3) Upgrade Installer

    1) Package Installer: If you have never installed Oracle WebLogic Server and this is the first time you are installing it, then what you need is Oracle WebLogic Server's Package Installer. There are two different kinds of Package Installers:

    a) Generic Package Installer: It does not include a Java runtime (when using the generic package installer, it is a prerequisite that a supported JDK is already installed). If you want to install WebLogic Server with a 64-bit JVM, you have to use the generic package installer. It is platform independent and can be used to install WebLogic Server on any supported 32-bit or 64-bit platform.

    b) OS-specific Package Installer: As the name suggests, this installer is platform specific. It is meant for installation with a 32-bit JVM only. Both the Sun and JRockit 32-bit JDKs come bundled with the OS-specific package installer, so there is no need to install a JDK in advance.

    2) Development-only and supplemental installers: If you have no plans to use Oracle WebLogic Server in production and need a simple installer for testing purposes only, then use this installer. Download the zip distribution, unzip it, and it's ready to use.

    3) Upgrade Installer: The upgrade installer is used to upgrade an Oracle WebLogic Server installation from one minor version to a higher minor version. There are no installers available to upgrade an Oracle WebLogic Server installation from one major version to another, though Domain Upgrade is always available.

    Note: the different versions of Oracle WebLogic Server in ascending order (excluding versions before WLS 9.2) are WLS 9.2.x, WLS 10.0.x, WLS 10.3.x, and WLS 12.1.x, where "x" denotes the minor version and 9.2, 10.0, 10.3 and 12.1 are the major versions. So you may use the upgrade installer to upgrade from WLS 10.3.1 to 10.3.6, or 10.0.1 to 10.0.2, etc.

    Important links to refer to: Oracle WebLogic Server Documentation, Supported Configurations, Installation Guide for Oracle WebLogic Server
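    The selection logic above boils down to a few questions; the tiny helper below simply restates it as code (an illustrative sketch, not an Oracle utility), which can be handy as a checklist.

      def choose_installer(fresh_install: bool, need_64bit_jvm: bool,
                           production_use: bool, minor_version_upgrade: bool) -> str:
          if not fresh_install and minor_version_upgrade:
              return "Upgrade Installer (minor version to higher minor version only)"
          if not production_use:
              return "Development-only and supplemental installer (zip distribution)"
          if need_64bit_jvm:
              return "Generic Package Installer (install a supported 64-bit JDK first)"
          return "OS-specific Package Installer (bundled 32-bit Sun/JRockit JDKs)"

      # Example: first-time install, 64-bit JVM, production use
      print(choose_installer(fresh_install=True, need_64bit_jvm=True,
                             production_use=True, minor_version_upgrade=False))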

    Read the article

  • The Social Business Thought Leaders

    - by kellsey.ruppel
    Enterprise Gamification, Big Data, Social Support, Total Customer Experience, Pull Organizations, Social Business. Are these purely the latest buzzwords to enter the market or significant trends that companies should keep an eye on? Oracle recently sponsored and presented at the 5th Social Business Forum, one of the largest European events on the use of social media as a business tool and accelerator. Through the participation of dozens of practitioners, experts and customer success stories, the conference demonstrated how a perfect storm of technology, management and cultural change is pushing peer-to-peer conversations deep into business processes. It is clear that Social Business is serving as a new propellant of agility, efficiency and reactivity. According to Deloitte and MIT what we have learned to call Social Business is considered important in the next 3 years by 86% of managers (see Social Business: What Are Companies Really Doing?, MIT Sloan Management Review and Deloitte). McKinsey further estimates the value that can be unlocked in terms of knowledge-worker productivity, consumer insights, product co-creation, improved sales, marketing and customer service up to $1300B (See The social economy: Unlocking value and productivity through social technologies, McKinsey Global Institute). This impacts any industry, with the strongest effects seen in Media & Entertainment, Technology, Telcos and Education. For those not able to attend the Social Business Forum and also for the many friends that joined us in Milan, we decided to keep the conversation going by extracting some golden nuggets from the perspective of five of the most well-known thought-leaders in this space. Starting this week you will have the chance to view: John Hagel (Author of the Power of Pull and Co-Chairman Center for the Edge at Deloitte & Touche) Christian Finn (Senior Director, WebCenter Evangelist at Oracle) Steve Denning (Author of The Radical Management and Independent Management Consulting Professional) Esteban Kolsky (Principal & Founder at ThinkJar) Ray Wang (Principal Analyst & CEO at Constellation Research) Stay tuned to hear: How pull organizations are addressing some of the deepest challenges impacting the market. How to integrate social into existing infrastructure and processes. How to apply radical management to become more agile and profitable. About the importance of gamification as an engagement lever. The first interview with John Hagel will be published tomorrow. Don't miss it and the entire series!

    Read the article

  • how to convert avi (xvid) to mkv or mp4 (h264)

    - by bcsteeve
    Very noob when it comes to video. I'm trying to make sense of what I'm finding via Google... but it's mostly Greek to me. I have a bunch of AVI files that won't play in my WD TV Play box. Mediainfo tells me they are Xvid. Specs for the box show that should be fine... but digging through forums says it's hit-and-miss. So I'd like to try converting them to H.264-encoded MKV or MP4 files. I gather avconv is the tool, but reading the manual just has me really, really confused. I tried the very basic example of:

      avconv -i file.avi -c copy file.mp4

    It took less than 4 seconds. And it worked... sort of. It "played" in that something came up on the screen... but there was horrible artifacting and scenes would just sort of melt into each other. I want to preserve quality if possible. I'm not concerned about file size. I'm not terribly concerned with the time it takes either, provided I can do them in a batch. Can someone familiar with the process please give me a command with the options? Thank you for your help. I'm posting the mediainfo in case it helps:

      General
      Complete name : \\SERVER\Video\Public\test.avi
      Format : AVI
      Format/Info : Audio Video Interleave
      File size : 189 MiB
      Duration : 11mn 18s
      Overall bit rate : 2 335 Kbps
      Writing application : Lavf52.32.0

      Video
      ID : 0
      Format : MPEG-4 Visual
      Format profile : Advanced Simple@L5
      Format settings, BVOP : 2
      Format settings, QPel : No
      Format settings, GMC : No warppoints
      Format settings, Matrix : Default (H.263)
      Muxing mode : Packed bitstream
      Codec ID : XVID
      Codec ID/Hint : XviD
      Duration : 11mn 18s
      Bit rate : 2 129 Kbps
      Width : 720 pixels
      Height : 480 pixels
      Display aspect ratio : 16:9
      Frame rate : 29.970 fps
      Standard : NTSC
      Color space : YUV
      Chroma subsampling : 4:2:0
      Bit depth : 8 bits
      Scan type : Progressive
      Compression mode : Lossy
      Bits/(Pixel*Frame) : 0.206
      Stream size : 172 MiB (91%)
      Writing library : XviD 1.2.1 (UTC 2008-12-04)

      Audio
      ID : 1
      Format : MPEG Audio
      Format version : Version 1
      Format profile : Layer 3
      Mode : Joint stereo
      Mode extension : MS Stereo
      Codec ID : 55
      Codec ID/Hint : MP3
      Duration : 11mn 18s
      Bit rate mode : Constant
      Bit rate : 192 Kbps
      Channel(s) : 2 channels
      Sampling rate : 48.0 KHz
      Compression mode : Lossy
      Stream size : 15.5 MiB (8%)
      Alignment : Aligned on interleaves
      Interleave, duration : 24 ms (0.72 video frame)
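    Not a definitive answer, but a sketch of the kind of re-encode that usually fixes this, assuming your avconv build includes libx264: "-c copy" only re-wraps the existing Xvid stream (hence the artifacts survive), whereas "-c:v libx264" actually re-encodes the video, and "-c:a copy" keeps the MP3 track untouched. It is wrapped in Python here only to show a simple batch loop over a folder; the avconv options are the interesting part.

      import subprocess, pathlib

      def convert(avi_path: str):
          out = pathlib.Path(avi_path).with_suffix(".mkv")
          cmd = ["avconv", "-i", avi_path,
                 "-c:v", "libx264", "-preset", "slow", "-crf", "20",   # quality-targeted H.264 encode
                 "-c:a", "copy",                                       # keep the MP3 track as-is
                 str(out)]
          subprocess.run(cmd, check=True)

      for avi in pathlib.Path(".").glob("*.avi"):                      # simple batch over a folder
          convert(str(avi))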

    Read the article

  • SQL Authority News – FalafelCON 2014: 2 days with the Best Developers in the World

    - by Pinal Dave
    I love presenting at various forums on various technologies, and I am extremely excited that I got invited to speak at Falafel Conference 2014 in San Francisco, where I will present two technology sessions on SQL Server. If you are into web development, or if you just want to attend a conference with the best industry speakers, this may be the right conference for you. What sets this conference apart from other conferences is the technology presented as well as the speakers. Usually one has to attend a very expensive, large-scale event to hear good speakers; at this conference, you will find that quite a few industry legends are available to present on bleeding-edge technology. Here are a few of the reasons why I believe you should attend:

    - Choose from four tracks covering Web, Mobile development and testing, Sitefinity, and Automated Testing, or attend sessions from all four!
    - Learn from the best developers and testers in the business in an intimate setting.
    - Surround yourself with your peers and the opportunity to network.
    - Learn about the latest platforms and technologies including Kendo UI, AngularJS, ASP.NET MVC, WebAPI, and more!

    Here are the details for the sessions which I am going to present at Falafel Conference.

    Secrets of SQL Server: Database Worst Practices. Abstract: Chances are you have heard, or even uttered, this expression. This demo-oriented session will show many examples where database professionals were dumbfounded by their own mistakes, and could even bring back memories of your own early DBA days. The goal of this session is to expose the small details that can be dangerous to the production environment and SQL Server as a whole, as well as talk about worst practices and how to avoid them. Shedding light on some of these perils and the tricks to avoid them may even save your current job. After attending this session, developers will only need 60 seconds to improve performance of their database server in their SharePoint implementation. We will have a quiz during the session to keep the conversation alive. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Additionally, all attendees of the session will have access to the learning material presented in the session.

    The Unsung Hero. Abstract: Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session.

    Register Now! I have learned from the Falafel team that they are running out of tickets and will soon close registration. For the next 10 days the price of registration is only USD 149. Trust me, you can't get such world-class training and networking opportunities at such a low price. Click to Register Here! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL

    Read the article

  • Disneyland Inside Out on iPhone and Android

    - by Ryan Cain
    It's hard to believe October was the last time I was over here on my blog. Ironically, after getting the developer phone from Microsoft I have been knee deep in iPhone programming, and for the past few weeks Android programming again. This time I've spent all my non-working hours programming a fun project for my "other" website, Disneyland Inside Out. Disneyland Inside Out, a vacation planning site for Disneyland in California, has been around in various forms since June 1996. It has always been a place for me to explore new technologies and learn about some of the new trends on the web. I recently migrated the site over to DotNetNuke and have been building out custom modules for DNN. I've also been hacking things together w/ the URLRewrite module in IIS 7.5 to provide strong SEO-optimized URLs. I can't say all that has really stayed within the DNN model of doing things, but it has worked pretty well. As part of my learning process, I spent most of the Fall bringing Disneyland Inside Out to the iPhone. I will post more details on my development experiences later. But this project gave me a really great opportunity to get a good feel for Objective-C development. After 3 months I actually feel somewhat competent in the language and iPhone SDK, instead of just floundering around getting things to work. The project also gave me a chance to play with some new frameworks on the iPhone and really dig into the Facebook SDK. I also dug into some of the Gowalla REST APIs as well. We've been live with the app in iTunes for just about 10 days now, and have been sitting in the top 200 of free travel apps for the past few days. You can get more info and the direct iTunes download link on our site: Disneyland Inside Out for iPhone. Since launching the iPhone version I have gotten back into Android development, porting the Disneyland Inside Out app over to Android. As I said in my first review of iPhone vs. Android, coming from a managed code background, Android is much easier to get going with. In just about 3 weeks total I will have about 85-90% of the functionality up and running in the Android app; that took probably 1.5-2x that time for iPhone. That isn't a totally fair comparison, as I am much more comfortable w/ Xcode and Objective-C today and can get some of the basic stuff done much faster than I could in the fall. Though I'd say some of the hardest code to debug is still the null pointer issues on objects that were dealloc'd too early in Objective-C. This isn't too bad with NSZombie enabled for synchronous code, but when you have a lot of async, which my app does, it can be hairy at times to track down exactly what was causing the issue. I will post more details later, as I am trying to wrap up a beta of the Android app today. But in the meantime, if you have an iPhone, iPod Touch or iPad, head on over to the site and take a look at my app.

    Read the article

  • 2D Grid Map Connectivity Check (avoiding stack overflow)

    - by SombreErmine
    I am trying to create a routine in C++, to run before a more expensive A* algorithm, that checks whether two nodes on a 2D grid map are connected or not. What I need to know is a good way to accomplish this sequentially rather than recursively, to avoid overflowing the stack. What I've Done Already: I've implemented this with ease using a recursive algorithm; however, depending upon the situation it will generate a stack overflow. Upon researching this, I've come to the conclusion that it is overflowing the stack because of too many recursive function calls. I am sure that my recursion does not enter an infinite loop. I generate connected sets at the beginning of the level, and then I use those connected sets to determine connectivity on the fly later. Basically, the generating algorithm loops left-to-right, top-to-bottom. It skips wall nodes and marks them as visited. Whenever it reaches a walkable node, it recursively checks in all four cardinal directions for connected walkable nodes. Every node that gets checked is marked as visited so it isn't handled twice. After checking a node, it is added to either a walls set, a doors set, or one of multiple walkable-node sets. Once it fills that area, it continues the original left-to-right, top-to-bottom loop, skipping already-visited nodes. I've also looked into flood-fill algorithms, but I can't make sense of the sequential versions and how to adapt them. Can anyone suggest a better way to accomplish this without causing a stack overflow? The only way I can think of is to do the left-to-right, top-to-bottom loop generating connected sets on a row basis, then check the previous row to see if any of the connected sets are connected, and join the sets that are. I haven't decided on the best data structures to use for that though. I also just thought about having the connected sets pre-generated outside the game, but I wouldn't know where to start with creating a tool for that. Any help is appreciated. Thanks!
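    For reference, a minimal sketch of the same connected-sets idea done iteratively: an explicit std::queue takes the place of the call stack, so the fill depth no longer depends on the size of a region. The GridMap layout and member names here are assumptions for illustration, not taken from the original code.

      #include <queue>
      #include <vector>

      struct GridMap {
          int width;
          int height;
          std::vector<bool> walkable;   // width * height cells, row-major; true = floor/door
          std::vector<int>  component;  // -1 = wall or not yet visited

          int index(int x, int y) const { return y * width + x; }

          // Label every 4-connected walkable region with an id, using an explicit
          // queue (iterative BFS) instead of recursion.
          void buildComponents() {
              component.assign(width * height, -1);
              static const int dx[4] = { 1, -1, 0, 0 };
              static const int dy[4] = { 0, 0, 1, -1 };
              int nextId = 0;
              for (int y = 0; y < height; ++y) {
                  for (int x = 0; x < width; ++x) {
                      int start = index(x, y);
                      if (!walkable[start] || component[start] != -1) continue;
                      std::queue<int> frontier;   // replaces the recursive call stack
                      frontier.push(start);
                      component[start] = nextId;
                      while (!frontier.empty()) {
                          int cell = frontier.front();
                          frontier.pop();
                          int cx = cell % width;
                          int cy = cell / width;
                          for (int d = 0; d < 4; ++d) {
                              int nx = cx + dx[d];
                              int ny = cy + dy[d];
                              if (nx < 0 || ny < 0 || nx >= width || ny >= height) continue;
                              int n = index(nx, ny);
                              if (walkable[n] && component[n] == -1) {
                                  component[n] = nextId;
                                  frontier.push(n);
                              }
                          }
                      }
                      ++nextId;
                  }
              }
          }

          // Two cells are connected iff they ended up with the same component id.
          bool connected(int x1, int y1, int x2, int y2) const {
              int a = component[index(x1, y1)];
              int b = component[index(x2, y2)];
              return a != -1 && a == b;
          }
      }

    With the labels built once per level, the pre-check before A* is a single comparison: if connected() returns false, the pathfinder can bail out immediately without searching at all.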

    Read the article

  • How to wrap console utils in webserver

    - by Alex Brown
    I have a big dataset (100Mbs/day) and a bunch of console and Tcl/Tk tools to view it - I want to turn it into a web app that I can build, and others can maintain. In long: my group runs simulations yielding 100s of Mbs of data daily, in multiple (mostly but not only) text forms. We have a bunch of scripts and tools, mostly old-school 1990's-style stuff requiring a 5-button mouse, as well as lots of ad-hoc scripts that engineers build out of frustration every month or so. These produce UIs, graphs, spreadsheets (various sizes), logs, event histories etc. I want to replace (or at least supplement) the xwindows / console style UI with a web-based one, so I need the following properties: pleasant to program; can wrap existing command-line tools in separate views (I don't need to scrape GUIs or anything); as I port logic from the existing scripts I can create a modularised and pleasant codebase to replace it; and I can attach a web UI to navigate between views - each view is likely to contain keys which might make sense to view in another. I am new to building systems that have logic on both the back-end and front-end of a web server. From that point of view, they do this: the backend wraps old-school executables, constructs calls into them, and then takes the output, wraps it up, niceifies it and delivers it to the web client. For instance the tool might generate a number of indexed images (per invocation) which I might deliver all at once or on demand. It may (probably will) need to do heavy stats on some sources. The frontend provides navigation connecting multiple views, performs requests from one view for data from another (or self to self), etc. It will probably have some views with a lot of interactivity. Can people please point me towards viable solutions for this? I know it's a bit of an open question, so as answers come in I hope to refine the spec until we have a good match. I guess I expect to see answers like "RoR!" "beans!" "Scala!" but please give an indication of why those are a good fit; I know nothing! I got bumped off SO for asking an open-ended question, so sorry if it's OT here too (let me know). I take the policy that I use the best/closest-matched language for a project, but most of my team are extremely low level (i.e. pipeline stages and CDyn) so I don't have the peer group to know where to start.

    Read the article

  • Code Analysis Rule Sets in Visual Studio 2010

    - by Anthony Trudeau
    Microsoft Visual Studio 2010 introduces the concept of rule sets when configuring code analysis.  This is a valuable change from Visual Studio 2008 that I didn't even realize I wanted.  Visual Studio 2008 by default selected all rules and then you had to remove rules on an item by item basis. The rule sets fall into logical groups including "Microsoft All Rules", "Microsoft Basic Correctness Rules", "Microsoft Security Rules", et al.  And within the project properties you can select one rule set, multiple rule sets, or you can define your own rule set based upon another. Selecting a single rule set is obviously the easiest option.  The default rule set when you create a new project is the "Microsoft Minimum Recommended Rules".  However, in my opinion the recommended rules are just too permissive.  For that reason you might want to change your rule set to "Microsoft All Rules" until you get around to creating your own rule set; or alternately you can select multiple rule sets which is an option from the rule set combo box.  The Visual Studio documentation has comprehensive help on what is contained within the rule sets. Creating your own rule set is easy if not obvious.  You need to start a rule set from an existing rule set.  To get started select a rule set in the combo box within the Code Analysis tab of the project properties.  I selected the "Microsoft All Rules" for my rule set, but you may find it easier to start with the "Microsoft Minimum Recommended Rules" if your rules are on the more permissive side. Once your rule set is selected click the Open button.  This will display a dialog that is similar in composition to the rules selection from Visual Studio 2008.  Browsing through the tree view you can select or deselect individual rules within their categories; and you can indicate that the rules are flagged as errors instead of the default which is a warning.  A nice touch to the form is that you get a help pane when you select an individual rule.  That helped me considerably when I first configured my rule set. Once you have finished selecting your rules click the Save tool button, specify a location and name, and click the Save button on the Save As dialog.  Once you're back on the Code Analysis tab you'll choose the Browse option within the combo box and open the file you just created.
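    For reference, the file the Save button writes is a small XML document. A minimal sketch of what a custom rule set might look like is below; the element names follow the usual Visual Studio 2010 .ruleset schema, but the specific rule IDs, actions, and names are only illustrative, so check the file Visual Studio generates for you rather than treating this as authoritative.

      <?xml version="1.0" encoding="utf-8"?>
      <RuleSet Name="My Team Rules" Description="Stricter set based on Microsoft All Rules" ToolsVersion="10.0">
        <!-- Hypothetical example: escalate two rules to errors and suppress one. -->
        <Rules AnalyzerId="Microsoft.Analyzers.ManagedCodeAnalysis" RuleNamespace="Microsoft.Rules.Managed">
          <Rule Id="CA1062" Action="Error" />   <!-- Validate arguments of public methods -->
          <Rule Id="CA2100" Action="Error" />   <!-- Review SQL queries for security vulnerabilities -->
          <Rule Id="CA1709" Action="None" />    <!-- Identifiers should be cased correctly (suppressed) -->
        </Rules>
      </RuleSet>

    Once saved, the Browse option on the Code Analysis tab points the project at this file, and because it is just a file it can be checked into source control so the whole team shares one rule set.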

    Read the article

  • What if you could work on anything you wanted?

    - by red@work
    This week we've downed our tools and organised ourselves into small project teams or struck out alone. We're working on whatever we like, with whoever we like, wherever we like. We've called it Down Tools Week and so far it's a blast. It all started a few months ago with an idea from Neil, our CEO. Neil wanted to capture the excitement, innovation, and productivity of Coding by the Sea and extend this to all Red Gaters working in Product Development. A brainstorm is always a good place to start for an "anything goes" project. Half of Red Gate piled into our largest meeting room (it's pretty big) armed with flip charts, post-its and a heightened sense of possibility. An hour or so later our SQL Servery walls were covered in project ideas. So what would you do, if you could work on anything you wanted? Many projects are related to tools we already make, others are for internal product development use, and some are, well, just something completely different. Someone suggested we point a web cam at the SQL Servery lunch queue so we can check it before heading to lunch. That one couldn't wait for Down Tools Week. It was up and running within a few days and, even better, it captures the table tennis table too. Thursday is the Show and Tell - I am looking forward to seeing what everyone has come up with. Some of the projects will turn into new products or features, so this probably isn't the time or place to go into detail of what is being worked on. Rest assured, you'll hear all about it! We're making a video as we go along too, which will be up on our website soon. In the meantime, all meetings are cancelled, we've got plenty of food in, and people are being very creative with the £500 expenses budget (Richard, do you really need an iPad?). It's brilliant to see it all coming together from the idea stage to reality. Catch up with our progress by following #downtoolsweek on Twitter. Who knows, maybe a future Red Gate flagship tool is coming to life right now? By the way, it's business as usual for our customer-facing and internal operations teams. Hmm, maybe we can all down tools for a week and ask Product Development to hold the fort? Post by: Alice Chapman

    Read the article

  • How Big Data and Social Won the Election

    - by Mike Stiles
    The story of big data's influence on the outcome of the US Presidential election is worth a good look, because a) it's a harbinger of things to come, and b) it's an example of similar successes available to any enterprise seriously resourcing integrated big data, modeling, and data-driven execution on all assets, including social. Obama campaign manager Jim Messina fielded a data and analytics brain trust 5 times larger than in 2008. At that time, there were numerous databases from various sources, few of them talking to each other. This time, the mission was to be metrics-centered and measure everything measurable, in context with all the other data. Big data showed them exactly what they needed to know and told them what to do about it. It showed them women 40-49 on the west coast would donate big money if they got to eat with George Clooney. Women on the east coast would pony up to hang out with Sarah Jessica Parker. Extensive daily modeling showed them what kinds of email appeals, from whom, and to whom, would prove most successful in raising cash, recruiting volunteers, and getting out the vote. Swing state voters were profiled and approached with more customized targeting than at any time in history. Ads were purchased on specific shows watched by the targets, increasing efficiency 14% over traditional media buys. For all the criticism of the candidate's focus on appearing on comedy and entertainment shows, and local radio morning shows, that's where the data sent them to reach the voters most likely to turn out for them. And then there was social. Again, more than in any other election, Facebook was used for virtual, highly efficient door-to-door canvassing. Facebook fans got pictures of friends in swing states and were asked to encourage them to act. Using that approach, 1 in 5 peer-to-peer appeals led to the desired action. Assumptions, gut, intuition, and campaign experience all took a backseat to strategy shifts solidly backed up by data. Zeroing in on demographics likely to back the President and tracking their mood daily literally changed the voter landscape. The Romney team watched Obama voters appear seemingly out of thin air. One Obama campaign aide said, "We ran the election 66,000 times every night." Which brings us to your organization. If you're starting to feel like the battle-cry of "but this is the way we've always done it" puts you in an extremely vulnerable position, you're right. Social has become a key communication tool of the 21st century. Failing to use it, or failing to invest in a deep understanding of who your customers and prospects are so the content you post there will achieve desired actions and results, will leave you waking up one morning wondering, "What happened?" @mikestiles Photo: stock.xchng

    Read the article
