Search Results

Search found 3533 results on 142 pages for 'jms topic'.


  • Oracle WebCenter: Composite Applications & Mash-Ups

    - by kellsey.ruppel(at)oracle.com
    We've talked in previous weeks about how the key goals of the new release of WebCenter are providing a Modern User Experience, unparalleled Application Integration, converging all the best of the existing portal platforms into WebCenter, and delivering a Common User Experience Architecture. We've provided an overview of Oracle WebCenter and discussed some of the other key goals in previous weeks, and this week we'll focus on how, with the new release of Oracle WebCenter, you can create composite applications and mashups. We recently talked with Sachin Agarwal, Director of Product Management of Enterprise 2.0 at Oracle, around the topic of Composite Applications and Mashups. Oracle WebCenter provides a rich set of tools and capabilities for pulling in content, applications and collaboration functionality from various different sources and weaving them together into what we call Mashups. Mashups that also consist of transactional applications from multiple sources are specifically called Composite Applications. With the latest release of Oracle WebCenter one can develop highly productive, task-based interfaces that aggregate a related set of applications that are part of a business process and provide in-context collaboration tools, so that users don't have to navigate away to different tabs to achieve these tasks. For instance, a call center representative (CSR) not only needs to be able to pull customer information from a CRM application like Siebel, but also related information from Oracle E-Business Suite about whether a specific order has shipped. The CSR will be far more efficient if he or she does not have to open different tabs to log in to multiple applications while the customer is waiting, but can access all this information in one mashup. Oracle WebCenter Suite provides a comprehensive set of tooling that enables a business user to quickly aggregate together a mashup and wire in different backend applications to create a custom dashboard. Not only does Oracle WebCenter support a wide set of standards (WSRP 1.0, 2.0, JSR 168, JSR 286) that allow portlets from other applications to be surfaced within WebCenter, but it also provides tools to bring in other web applications such as .NET applications as well as SharePoint web parts. The new Business Mash-up editor allows business users to take any Oracle application or 3rd-party application and wire the backend data sources or APIs to a rich set of visualizations and reuse them in mashups. Moreover, business users can customize or personalize any page using Oracle WebCenter Composer's on-the-fly visual page editing features. Users access and select different resource components available in Oracle WebCenter's Business Dictionary in order to add new content to the page. The Business Dictionary provides a role-based view of available components or resources, and these components can include information from a variety of enterprise resources such as enterprise applications, managed content, rich media, business processes, or business intelligence systems.
    Together, Oracle WebCenter's Composer and Business Dictionary give users access to a powerful, yet easy to use, set of tools to personalize and extend their Oracle WebCenter portals and applications without involving IT. Keep checking back this week as we share more information on how you can easily create Composite Applications and Mashups with Oracle WebCenter. Technorati Tags: UXP, collaboration, enterprise 2.0, modern user experience, oracle, portals, webcenter, applications, mashups, composite applications

    Read the article

  • Correct way to handle path-finding collision matrix

    - by Xander Lamkins
    Here is an example of me utilizing path finding. The red grid represents the grid utilized by my A* library to locate a distance. This picture is only an example; currently it is all calculated on the 1x1 pixel level (pretty darn laggy). I want to make it so that the farther I click, the less accurate it will be (split the map into larger grid pieces). Edit: as mentioned by Eric, this is not a required game mechanic. I am perfectly fine with any method that allows me to make this accurate while still fast. This isn't really the topic of this question though. The problem I have is, my current library uses a two-dimensional grid of integers. The higher the number in a cell, the more resistance for that grid tile. Currently I'm setting all unwalkable spots to Integer Max. Here is an example of what I want: I'm just not sure how I should set up the arrays of integers of the grid. Every time an element is added/removed to/from the game, its collision details are updated in the table. Here is a picture of what the map looks like on my collision layer: I probably shouldn't be creating new arrays every time I have to do a path find, because my game needs to support tons of PF at the same time. Should I have multiple arrays that are all updated when the dynamic elements are updated (a building is built/a building is destroyed)? The problem I see with this is that it will probably make the creation and destruction of buildings a little more laggy than I would want, because it would be setting the collision grid for each accuracy level being maintained. I would also have to add or remove some arrays if I ever changed the map size in the future. Should I generate the new array based on an accuracy value every time I need to PF? The problem I see with this is that it will probably make any form of PF just as laggy, because it will have to search through MapWidth x MapHeight cells to shrink it all down. Or is there a better way? I'm certainly not the best at optimizing really anything. I've just started dealing with XNA so I'm not used to optimization code really having much of an effect until now... :( If you need code examples, please ask. I'll add them as an edit. EDIT: While this doesn't directly relate to the question, I figure the more information I provide, the better. To keep units from ending up only as accurate as the coarse grid allows, I've decided that once the unit PFs over to the less accurate grid piece, it will then PF on a more accurate level to the exact position requested.
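
    One possible approach, sketched below in C# under stated assumptions: keep the detailed 1x1 collision grid as the single source of truth and derive a coarser cost grid from it for each accuracy level, recomputing only the affected coarse cells when a building is placed or removed. All names here (CollisionGrids, CoarsenGrid, fineGrid, cellSize) are hypothetical and not part of the asker's A* library.

        using System;

        static class CollisionGrids
        {
            // Derive a coarser A* cost grid from the detailed grid by taking the
            // maximum resistance inside each block, so an unwalkable fine cell
            // (int.MaxValue) keeps the whole coarse cell unwalkable.
            public static int[,] CoarsenGrid(int[,] fineGrid, int cellSize)
            {
                int width = fineGrid.GetLength(0);
                int height = fineGrid.GetLength(1);
                int[,] coarse = new int[(width + cellSize - 1) / cellSize,
                                        (height + cellSize - 1) / cellSize];

                for (int x = 0; x < width; x++)
                {
                    for (int y = 0; y < height; y++)
                    {
                        int cx = x / cellSize;
                        int cy = y / cellSize;
                        coarse[cx, cy] = Math.Max(coarse[cx, cy], fineGrid[x, y]);
                    }
                }
                return coarse;
            }
        }

    Caching one such grid per accuracy level and recomputing only the coarse cells a changed building overlaps would keep construction and destruction cheap, while each path find searches the small coarse grid instead of MapWidth x MapHeight cells.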

    Read the article

  • Games at Work Part 2: Gamification and Enterprise Applications

    - by ultan o'broin
    Gamification and Enterprise Applications In part 1 of this article, we explored why people are motivated to play games so much. Now, let's think about what that means for Oracle applications user experience. (Even the coffee is gamified. Acknowledgement @noelruane. Check out the Guardian article Dublin's Frothing with Tech Fever. Game development is big business in Ireland too.) Applying game dynamics (gamification) effectively in the enterprise applications space to reflect business objectives is now a hot user experience topic. Consider, for example, how such dynamics could solve applications users' problems such as: becoming familiar or expert with an application or process; building loyalty, customer satisfaction, and branding relationships; collaborating effectively and populating content in the community; completing tasks or solving problems on time; encouraging teamwork to achieve goals; improving data accuracy and completeness of entry; locating and managing the correct resources or information; managing changes and exceptions; and setting and reaching targets, quotas, or objectives. Games' Incentives, Motivation, and Behavior I asked Julian Orr, Senior Usability Engineer in the Oracle Fusion Applications CRM User Experience (UX) team, for his thoughts on what potential gamification might offer Oracle Fusion Applications. Julian pointed to the powerful incentives offered by games as the starting place: "The biggest potential for gamification in enterprise apps is as an intrinsic motivator. Mechanisms include fun, social interaction, teamwork, primal wiring, adrenaline, financial, closed-loop feedback, locus of control, flow state, and so on. But we need to know what works best for a given work situation." For example, in CRM service applications, we might look at the motivations of typical service applications users (see figure 1) and then determine how we can 'gamify' these motivations with techniques to optimize the desired work behavior for the role (see figure 2). (Figures 1 and 2 are not reproduced in this excerpt.) Involving Our Users Online game players are skilled collaborators as well as problem solvers. Erika Webb (@erikanollwebb), Oracle Fusion Applications UX Manager, has run gamification events for Oracle, including one on collaboration and gamification in Oracle online communities that involved Oracle customers and partners. Read more... However, let's be clear: gamifying a user interface that's poorly designed is merely putting the lipstick of gamification on the pig of work. Gamification cannot replace good design and killer content based on understanding how applications users really work and what motivates them. So, Let the Games Begin! Gamification has tremendous potential for the enterprise application user experience. The Oracle Fusion Applications UX team is innovating fast and hard in this area, researching with our users how gamification can make work more satisfying and enterprises more productive. If you're interested in knowing more about our gamification research, sign up for more information or check out how your company can get involved through the Oracle Usability Advisory Board. Your thoughts? Share them in the comments.

    Read the article

  • Three Master Data Management Deployment Tips

    - by david.butler(at)oracle.com
    MDM is all about data quality and data governance. We now know that improved data quality raises all operational and analytical boats. But it's not just about deploying data quality tools. It's about deploying data quality tools within and across the IT landscape - from a thousand points of data entry to a single version of the truth. Here are three tips for deploying MDM across your applications and enterprise. #1: Identify a tactical, high-value business problem where MDM can materially help: support a customer acquisition and retention program with a 'customer' master data solution; accelerate new products and services to market with a 'product' master data solution; reduce supplier exceptions or support spend control initiatives with a 'supplier' master data solution; support new store (branch, campus, restaurant, hospital, office, well head) location analysis with a 'site' master data solution; fix long-standing Chart of Accounts and Cost Center problems with a 'financial' master data solution; or support M&A activity, application upgrades, an SOA initiative, a cloud computing program, or a new business intelligence deployment by implementing a mix of master data solutions. #2: Incrementally expand to a full information architecture. Quite often, the measurable return on investment from tactical MDM initiatives will fund future deployments. Over time, the MDM solution expands into its full architecture to cover the entire IT landscape. Operations and analytics are united, IT flexibility is restored, and sustainable competitive advantage is achieved. #3: Bring business into every MDM deployment. To be successful, MDM must work hand in hand with data governance. In fact, Oracle MDM incorporates data governance tools for business users. IT can ensure data quality, but only after the business side has defined what quality means. The business establishes the rules for governing the master data, and then IT enforces the rules via the MDM applications. Without this business/IT collaboration, MDM initiatives seldom achieve their full potential. It is not very often that a technology comes along that can measurably assist organizations across a wide variety of top IT initiatives. Reducing costs, increasing flexibility, getting more out of existing assets, and aligning business and IT are not easy tasks for any CIO. But with MDM, success is achievable. IT can regain its place as a center for innovation. For more information on this topic, take a look at my article Master Data Management Deployment Tips in the Opinion Section of Oracle's Profit Online magazine.

    Read the article

  • Meaning of Crawl errors

    - by com
    My question is about the definition of Crawl errors in Google Webmaster Tools. Crawl errors are divided into a few sections. Let's first consider the HTTP section. I assume that all broken links in this section were somehow found by the crawler; these are not the links from the sitemap. If all these links were found by scanning pages from the sitemap for links, why doesn't it mention the source page, like the Sitemap section does with its Linked From column? Please correct me if I am wrong. Sitemap section: it looks like all those links came from my sitemap. But there is a Linked From column; I already know that all those broken links are from the sitemap, so in order to fix the errors I should revise my sitemap. Am I wrong? Not followed section: I don't know what it means. It looks like it accumulates all links that caused a redirect, but for some reason Google considers all those redirects as wrong redirects. Do you know if there is any set of rules for how to determine a wrong redirect? Actually, I found where my mistake was: I tried to normalize a URL and redirect it to the right URL, but I did the normalization in a wrong way. Not found section: this section is like the HTTP section, but with 404 errors. This section has a Linked From column, but very often Linked From shows "unavailable". What does that mean? That Google cannot tell me how it found this non-existing page? How is this section related to the Sitemap section? Does this section contain all 404 links from the sitemap too? There are too many 404 links, many more than in the sitemap. I tried to take a look at what is in Linked From, and I saw that one link came from a sitemap two months ago. But why does Google keep it indexed? The link is already dead, and the new sitemap doesn't have it. Is there any expiry date for old links? Unreachable section: it looks like this section is for 500 errors. This section doesn't contain a Linked From column. There are too many completely meaningless links; I really don't know where this stuff came from, and without Linked From I am not able to figure out how to deal with it. Sorry for such a big topic, but I just want to make it clear what every section stands for, because that is extremely crucial in order to deal with all those problems. Hopefully it will be useful not just for me. Thanks!

    Read the article

  • Partial Submit vs. Auto Submit

    - by Frank Nimphius
    Partial Submit ADF Faces adds the concept of partial form submit to JavaServer Faces 1.2 and beyond. A partial submit actually is a form submit that does not require a page refresh and only updates components in the view that reference the command component in their PartialTriggers property. Another option for refreshing a component in response to a partial submit is to call AdfFacesContext.getCurrentInstance().addPartialTarget(component_instance_handle_goes_here) in a managed bean. If a form contains required fields that the user left empty when invoking the partial submit, then errors are shown for each of the fields, as the full form gets submitted. Autosubmit An input component that has its autosubmit property set to true also performs a partial submit of the form. However, this time it doesn't submit the entire form but only the component that triggers the submit, plus the components that reference it in their PartialTriggers property. For example, consider a form that has three input fields inpA, inpB and inpC, with autosubmit=true set on inpA and required=true set on inpB and inpC. Use case 1: Running the view, entering data into inpA and then tabbing out of the field will submit the content for inpA but not for inpB and inpC. Furthermore, none of the required field settings on inpB and inpC causes an error. Use case 2: You change the configuration of inpC and set its PartialTriggers property to point to the ID of component inpA. When rerunning the sample, entering a value into inpA and tabbing out of the field will now submit the inpA and inpC fields and thus show an error for the missing required value on inpC. Internally, using autosubmit=true on an input component sets the event root to just this field, which is good to have in case of dependent field validation or behavior. The event root can be extended to include other components by using the PartialTriggers property on these components to point to the input field that has autosubmit=true defined. PartialSubmit vs. AutoSubmit Partial submit set on a command component submits the whole form and leaves it to the developer to decide which UI component is refreshed in response. Client-side required field validation (as well as the server-side equivalent) is not disabled but executed in this scenario. Setting immediate=true on the command item to skip validation doesn't help, as it would also skip the model update. Auto submit is a functionality on the input components and also performs a partial form submit. However, in addition an event root is defined that narrows the scope of the submitted data and thus the components that are validated on the request. To read more about this topic, see: http://docs.oracle.com/cd/E23943_01/web.1111/b31973/af_lifecycle.htm#CIAHCFJF

    Read the article

  • Live Virtual Class for Partners: Application Management

    - by Patrick Rood
    November 11-12th Manageability Partner Community Application Management Suite Live Virtual Training. This training will be offered to Oracle Partners over a live webcast during business hours. Each day will consist of approximately 2-3 hours of lecture/demos. It will be recorded and available for playback. Purpose: This virtual course is a comprehensive program of training sessions, prepared and presented by Product Managers. This ensures you have all the information you need to position and sell Oracle Application Management Suites. The sessions will be lecture based with demonstrations to complement. These sessions are interactive and everyone will be required to participate. Customer case studies will be used as appropriate and there will be plenty of opportunity for in-depth discussion. Please bring to the training an understanding of what Enterprise Manager 12c does for our customers, along with your own experiences to date. Logistics: Topic: Oracle Application Management Suite Training (2 days, approximately 2-3 hours per day). WebEx session details to be provided upon registration. Monday 11th November | 14:00 GMT | 18:00 Gulf (GMT+4) Tuesday 12th November | 14:00 GMT | 18:00 Gulf (GMT+4)

    Read the article

  • Highlights From Interact '12 - Healthcare Industry User Group

    - by John Webb
    Last week the Oracle team traveled to Orlando for the 18th annual Healthcare Industry User Group (HIUG) conference, Interact '12. HIUG has over 3,000 members representing 180 organizations. While we now know the result of the SCOTUS ruling yesterday, the consensus at the conference last week was summed up well in the welcome note from HIUG President, Chris Ryzewski: "Regardless of the legal ruling on this administration's Patient Protection and Affordable Care Act we will undoubtedly be called upon to further reduce costs and be more efficient in every aspect of our business processes." Well put! Attendance exceeded previous years with several hundred attendees, over 100 sessions, and a trade show that numbered 40 booths. Most of the HIUG members use PeopleSoft applications, and they tend to be full-suite customers who use PeopleSoft broadly from HCM to Financials and Supply Chain. For many customers who have licensed PeopleSoft in the last year, it was their first experience with a very strong and collaborative user group. I had dinner with a provider who is rolling out PeopleSoft HCM and ERP to a nationwide system of forty hospitals. A key driver for this organization and others is how to leverage PeopleSoft applications to meet the cost reduction goals mentioned above. In the area of procurement, the topic of Supplier Contract Management attracted a lot of attention. Contract pricing and adherence to contracts throughout the procure-to-pay life cycle are key to meeting cost containment objectives. Customers were excited to see the new faceted search capabilities and usability of the upcoming PeopleSoft eProcurement release. The new Work Center concept was discussed in several areas, including the Cost Reconciliation Work Center and the Supply Demand Work Center, which enables healthcare-specific functions around PAR counts and related replenishment activities. The latest Feature Pack of HCM 9.1 was demonstrated with the Talent Summary and Manager Dashboard. Customers were excited to see the major advances in self service available today. The Grants Special Interest Group focused quite a bit on the usage of PeopleSoft's Project Costing "Funds Distribution" feature, which can be used to manage capital projects funded by multiple agencies and sources. Along with the latest release of the Mobile Inventory solution that several hospitals have now implemented, a preview of new mobile applications for expenses and approvals drew a lot of attention. The PeopleSoft focus on assisting these companies in their goals to contain costs and create new efficiencies continues forward. We look forward to Interact '13!

    Read the article

  • Good, simple reasons for having multiple environments

    - by smp7d
    Throughout my career I have worked at companies that had a collection of different environments for different purposes. We always had more or less our desktop environment, a test environment, a QA environment, a staging environment and a production environment. This went for both servers/applications and any data sources we were using. When I started at my current company I found that 90% of the apps were either developed on a desktop environment against production data sources or developed directly on the production server, depending on the platform. I wasn't fazed, because I was hired in part to make changes to improve the way the development team functioned, which was clear from my interview process. We slowly started to turn the philosophy around, and pretty soon most of the apps could be run in either a desktop, test or production environment. Not too long after that, staging came around as well. Now most of our developers see the benefit of this methodology and defend it vigilantly. However, we have a number of legacy apps that never got migrated. We also have a number of legacy programmers who think of this as a waste of time. Unfortunately, we got lip service but never full buy-in from management. We got what we thought was a commitment to invest substantially in this about a year ago, but nothing materialized despite the considerable planning that we put into it. Now we are finding that we need more and more environments. We need help from the server/network administration teams for setup and we need participation from the business stakeholders to support the release cycle. We are at a place now where a project can function in what I consider a "normal" way only if you have the right people on the project and the time to set up the proper environments. I'd love to present a complete argument, but management really has no time and interest in hearing me out until there is a critical issue. I can't really articulate the benefits simply, as it always just seemed second nature to me. I was wondering if there are any good, simple, irrefutable reasons for the separation of environments that would get managers with no development experience to get behind this idea. Are there any good resources/literature on the topic?

    Read the article

  • TFS Hosting: discountasp.net TFS

    - by Enrique Lima
    In the last month or so I have been able to test and experience first-hand the offering from discountasp.net for hosted TFS 2010. This first part is a description of the setup process for the account itself, plus some additional information on what you will find through the portal on their site. Not long ago, I posted a little tidbit on hosting TFS. Through it I also did a shameless plug for my employer, our services and the type of hosting we recommend. So, wouldn't me running on discountasp.net be an issue? Actually? NO. Ok, enough rambling. Let's get some details here. It is a Software as a Service model. Through it we get source control, version control, work item tracking and such. What about Build? If your need includes build management and such, you may need to look at some other options. But this is still a great offering for those that are moving from SourceSafe, or organizations who have 3 to 5 developers on staff and do not foresee getting larger anytime soon. Can it support more than 5 developers? Yes, but then we need to get into how you are using TFS. Do you need more than just Basic? For example, SharePoint and Reporting Services integration. The signup process was seamless! Very easy to follow and complete, and then transition to Visual Studio to start working. An email followed the signup process; it contained details on how to get to the Team Foundation Server Control Panel login. Once there, here is what I saw after the initial setup process of naming my Team Project Collection: So, moving on ... once I clicked the area to get my server info, I got the following: Then it was a matter of getting the first user in there: Then on to connecting Visual Studio to my hosted TFS. With the server information obtained and the user account created, I will configure those options in Visual Studio. Using Team Explorer, I am adding a new server configuration. Once this is provided, click OK; you will be challenged for a username and password, provide them and you will land on the following screen. Then click Close. You will now be connected to your server and Team Project Collection. Since this will likely be the first time connecting, you will have no projects (I already have 2 going). Click Connect, and you will be back in Team Explorer. My next post on the topic will be on creating your first Team Project and uploading a Project Template to the server.

    Read the article

  • links for 2011-02-21

    - by Bob Rhubart
    Calling all enterprise architects | Enterprise architecture - InfoWorld Nominations are now open for the 2011 InfoWorld Enterprise Architecture Award, honoring companies whose enterprise architecture initiatives made a difference (tags: ping.fm) Red Tape, Part II : OTN Garage "How do you back up all of that storage? Tape: really fast tape. And, lots of it. This creates a whole variety of very interesting challenges today, elevating the topic to – at the very least – glamorous, but I think it qualifies as being downright hot!" - Kemer Thomson (tags: oracle entarch datastorage) The Buttso Blathers: Using Secure Config Files with the WebLogic Maven Plugin "WebLogic Server has long had a mechanism to provide a more secure way of connecting to the Administration Server from client utilities such that the username and password do not need to be specified and therefore can’t be seen from the process list or command shell history." (tags: oracle weblogic) World-class EA | Open Group Blog "World-class Enterprise Architecture is all about creating definitive collateral that defines how the architecture delivers value for societal value." - Mick Adams (tags: enterprisearchitecture entarch opengroup) Enterprise Process Maps: A Process Picture worth a Million Words (Telecommunications Architecture Corner) "Every BPM project (holistic BPM kick-off, enterprise system implementation, Service-oriented Architecture, business process transformation, corporate performance management, etc.) should be begin with a clear understanding of the business environment..." - Raul Goycoolea (tags: oracle otn telecommunications businessprocess entarch bpm) Andrejus Baranovskis's Blog: WebCenter PS3 Customization Manager- Long Awaited Feature for MDS Oracle ACE Director Andrejus Baranovski shares "really great news for those of you who are working on MDS personalization and customization support in Oracle Fusion Middleware applications." (tags: oracle otn oracleace webcenter enterprise2.0) Oracle WebCenter: Common User Experience Architecture (Oracle Enterprise 2.0 Blog) Kellsey Ruppel describes "how the new release of Oracle WebCenter delivers a Common User Experience Architecture." (tags: oracle otn webcenter enterprise2.0) Java / Oracle SOA blog: Do your SOA deployments & configuration with AIA Oracle ACE Edwin Biemond illustrates the use of the SOA Suite / FMW deployment framework, "one of the Application Integration Architecture (AIA) hidden gems." (tags: oracle oracleace soa otn fusionmiddleware) Enterprise Software Development with Java: Clustering Stateful Session Beans with GlassFish 3.1 Oracle ACE Director Markus Eisele describes what he did "to get a Stateful Session Bean failover scenario working with two instances on one node." (tags: oracle otn oracleace glassfish) Enhanced REST Support in Oracle Service Bus 11gR1 (SOA Thinker) Jeff Davies illustrates how to re-implement the REST-ful Products services using query strings for passing parameter information. (tags: oracle otn soa REST)

    Read the article

  • BizTalk 2010 Certification Exam

    - by Paul Petrov
    I took a shot at the new (to me) certification exam for BizTalk 2010. I was able to pass it without any preparation, just based on experience. That does not mean this exam is a very simple one. Compared to the previous one (2006 R2) it covers some new areas (like WCF) and has some demanding questions and situations to think about. But the most challenging factor is the broad feature coverage. Overall, my impression is that if BizTalk continues to grow in scope it's better to create separate exams for core functionality and extended features (like EDI, RFID, LOB adapters), because it's really hard to cover the vast array of BizTalk capabilities. As far as required knowledge and question allocation go, I think Microsoft's description is on target. There were definitely more questions on deployment, configuration and administration aspects compared to the previous exam. WCF and WCF-based adapters now play a big role and this topic was covered well too. Extended functionality is claimed at 13% of the exam; I felt there were plenty of RFID questions but not many on EDI, which is why I thought it would be useful to split the exam into two to cover all of them equally. BRE is still there and good, because it's usually not a very well-known or loved feature of the package. In the end, for those who plan to get certified, my advice would be to know all these areas of BizTalk for guaranteed passing: messaging and orchestrations, core adapters, routing, patterns; development of all artifacts and orchestrations; debugging and exception handling; packaging, deployment, tracking and administration; WCF bindings and adapters; BAM, BRE, RFID, EDI, etc. You may get by not knowing one smaller non-essential part (like I did with RFID, for example). In such a case you had better know all other areas very well to cover for the weak spot. If there is more than one gap in your knowledge, it's a good idea to study and prepare: MSDN, blogs, virtual labs and a good VM to play with can help when experience is not enough. So best wishes and good skill to you in passing this certification!

    Read the article

  • C-states and P-states : confounding factors for benchmarking

    - by Dave
    I was recently looking into a performance issue in the java.util.concurrent (JUC) fork-join pool framework related to particularly long latencies when trying to wake (unpark) threads in the pool. Eventually I tracked the issue down to the power & scaling governor and idle-state policies on x86. Briefly, P-states refer to the set of clock rates (speeds) at which a processor can run. C-states reflect the possible idle states. The deeper the C-state (higher numerical values) the less power the processor will draw, but the longer it takes the processor to respond and exit that sleep state on the next idle to non-idle transition. In some cases the latency can be worse than 100 microseconds. C0 is the normal execution state, and P0 is "full speed", with higher Pn values reflecting reduced clock rates. C-states and P-states are orthogonal, although P-states only have meaning at C0. You could also think of the states as occupying a spectrum as follows: P0, P1, P2, Pn, C1, C2, ... Cn, where all the P-states are at C0. Our fork-join framework was calling unpark() to wake a thread from the pool, and that thread was being dispatched onto a processor in a deep C-state, so we were observing rather impressive latencies between the time of the unpark and the time the thread actually resumed and was able to accept work. (I originally thought we were seeing situations where the wakee was preempting the waker, but that wasn't the case. I'll save that topic for a future blog entry). It's also worth pointing out that higher P-state values draw less power and there's usually some latency in ramping up the clock (P-states) in response to offered load. The issue of C-states and P-states isn't new and has been described at length elsewhere, but it may be new to Java programmers, adding a new confounding factor to benchmarking methodologies and procedures. To get stable results I'd recommend running at C0 and P0, particularly for server-side applications. As appropriate, disabling "turbo" mode may also be prudent. But it also makes sense to run with the system defaults to understand if your application exhibits any performance sensitivity to power management policies. The operating system power management sub-system typically controls the P-states and C-states based on current and recent load. The scaling governor manages P-states. Operating systems often use adaptive policies that try to avoid deep C-states for some period if recent deep idle episodes proved to be very short and futile. This helps make the system more responsive under bursty or otherwise irregular load. But it also means the system is stateful and exhibits a memory effect, which can further complicate benchmarking. Forcing C0 + P0 should avoid this issue.

    Read the article

  • Simple collision detection in Unity 2D

    - by N1ghtshade3
    I realise other posts exist with this topic yet none have gone into enough detail for me. I am attempting to create a 2D game in Unity using C# as my scripting language. Basically I have two objects, player and bomb. Both were created simply by dragging the respective PNG to the stage. I have set up touch controls to move player left and right; gravity of any kind is not needed as I only require it to move x units when I tap either the left or right side of the screen. This movement is stored in a script called playerController.cs and works just fine. I also have a variable health = 3 for player, which is stored in healthScript.cs. I am now at a point where I am stuck. I would like it so that when player collides with bomb, health decreases by one and the bomb object is destroyed. So what I tried doing is using a new script called playerPhysics.cs, I added the following: void OnCollisionEnter2D(Collision2D coll){ if(coll.gameObject.name=="bomb") GameObject.Destroy("bomb"); healthScript.health -= 1; } While I'm fairly sure I don't know the proper way to reference a variable in another script and that's why the health didn't decrease when I collided, bomb never disappeared from the stage so I'm thinking there's also a problem with my collision. Initially, I had simply attached playerPhysics.cs to player. After searching around though, it appeared as though player also needed a rigidBody attached to it, so I did that. Still no luck. I tried using a circleCollider (player is a circle), using a rigidBody2D, and using all manner of colliders on one and/or both of the objects. If you could please explain what colliders (if any) should be attached to which objects and whether I need to change my script(s), that would be much more helpful than pointing me to one of the generic documentation examples I've already read. Also, if it would be simple to fix the health thing not working that would be an added bonus but not exactly the focus of this question. Bear in mind that this game is 2D; I'm not sure if that changes anything. Thanks!
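
    For comparison, here is a hedged sketch of how the handler could look once the bomb check braces both statements, the collided object itself is passed to Destroy, and the health script is fetched with GetComponent. The component type healthScript and its public health field are assumptions taken from the description above, not from the asker's actual project.

        using UnityEngine;

        public class playerPhysics : MonoBehaviour
        {
            // Assumption: healthScript is a component on the same player object
            // exposing a public int field named health, as described above.
            private healthScript playerHealth;

            void Start()
            {
                playerHealth = GetComponent<healthScript>();
            }

            void OnCollisionEnter2D(Collision2D coll)
            {
                if (coll.gameObject.name == "bomb")
                {
                    Destroy(coll.gameObject);      // destroy the object that was hit
                    playerHealth.health -= 1;      // braces keep this inside the bomb check
                }
            }
        }

    Note that OnCollisionEnter2D only fires when both objects have 2D colliders (for example a CircleCollider2D on the player and a BoxCollider2D or CircleCollider2D on the bomb), neither marked as a trigger, and at least the moving object has a Rigidbody2D.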

    Read the article

  • Data structures for a 2D multi-layered and multi-region map?

    - by DevilWithin
    I am working on a 2D world editor and, subsequently, a world format. If I were to handle the game "world" being created just as a layered set of structures, in either top or side views, it would be considerably simpler to do most things. But, since this editor is meant for 3rd parties, I have no clue how big a world anyone will want to make, and I need to keep in mind that eventually it will become simply too much to keep checking, handling and comparing things that are happening completely away from the player position. I know the solution for this is to subdivide my world into sub-regions and stream them on the fly, loading and unloading resources and other data. This way I know a virtually infinite game area is achievable. But, while I know theoretically what to do, I really have a few questions I hoped to get answered for some hints about the topic. 1) The logical way to handle the regions is some kind of grid; would you pick evenly distributed blocks with equal sizes, or would you let the user subdivide areas by taste with irregularly sized rectangles? 2) In the case of even grids, would you use some kind of block/chunk neighbouring system to check when the player crosses a boundary, or just put all those blocks in a simple array? 3) Given that a region is a different data structure than its owning "game world", when streaming a region, would you deliver the objects to the parent structures and track them for unloading later, or retain the objects in each region for a more "hard-limit" approach? 4) Introducing the subdivision approach to the project, and already having a multi-layered scene graph structure in place, how would I make it support the new concept? Would you have the parent node hold the layers as children, and replicate in each layer node a node per region? Or the opposite, where the parent node owns all the possible regions, and each region has multiple layers as children? Or would you just put the region logic outside the graph completely (compatible with the first suggestion in Q.3)? 5) When I say virtually infinite worlds, I mean it of course under the constraints of the variable sizes and so on. Using float positions, a HUGE world can already be made. Do you think it's sane to think beyond that? Because I think it's OK to stick to this limit, since it will never be reached so easily. 6) As for when to stream a region, I'm implementing it as a collection of watcher cameras, which the streaming system works with to know what to load/unload. The problem here is, I will be needing some kind of warps/teleports built in for my game, and there is a chance I will be teleporting a player to an unloaded region far away. How would you approach something like this? Is it sane to load any region that can be teleported to by a warp within a radius from the player? Sorry for the huge question; any answers are helpful!
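
    As a rough illustration of the even-grid option mentioned above, here is a small sketch of mapping a world position to a region index and computing the set of regions a watcher camera keeps loaded. The region size, all names, and the choice of C# are assumptions for illustration only, not part of the asker's editor.

        using System;
        using System.Collections.Generic;

        struct RegionCoord
        {
            public int X, Y;
            public RegionCoord(int x, int y) { X = x; Y = y; }
        }

        static class RegionStreaming
        {
            public const float RegionSize = 512f;   // hypothetical fixed region size

            // Which region a world position falls into.
            public static RegionCoord ToRegion(float worldX, float worldY)
            {
                return new RegionCoord((int)Math.Floor(worldX / RegionSize),
                                       (int)Math.Floor(worldY / RegionSize));
            }

            // Regions a watcher camera keeps loaded: its own region plus neighbours.
            public static HashSet<RegionCoord> ActiveRegions(float camX, float camY, int radius)
            {
                var active = new HashSet<RegionCoord>();
                RegionCoord center = ToRegion(camX, camY);
                for (int dx = -radius; dx <= radius; dx++)
                    for (int dy = -radius; dy <= radius; dy++)
                        active.Add(new RegionCoord(center.X + dx, center.Y + dy));
                return active;
            }
        }

    A warp or teleport then just means computing ActiveRegions around the destination before moving the player, which matches the load-by-radius idea in the last question; regions not present in any watcher's active set become candidates for unloading.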

    Read the article

  • 2011 - ALMs for your development team and the people they work with.

    - by David V. Corbin
    Welcome to 2011; it is already shaping up to be a very exciting year. The title of the post is not about charitable giving, although that is also a great topic. Application Lifecycle Management, and the systems that support it, is an area where I expect many teams to invest heavily in 2011. For those not familiar with ALM, it can be simplified down to "a comprehensive view of all of the ideas, requirements, activities and artifacts that impact an application over the course of its lifecycle, from concept until decommissioning". Obviously, this encompasses a large number of different areas even for relatively small and medium sized projects. In recent years, many teams have adopted methodologies which address individual aspects of this; but the majority of this adoption has resulted in "islands of improvement" rather than the desired comprehensive outcome... Until now! Last year Microsoft released Team Foundation Server 2010 along with Visual Studio 2010 Ultimate Edition, and with these two in combination the situation has drastically changed. At last there is a single environment that is capable of handling all aspects of ALM, and is also capable of dealing with migration and integration with existing systems to make the transition to a single solution much easier. The possibilities (and practicalities) are nothing short of amazing. Architecture through Testing integration? YES. Being able to correlate specific requirement items (and their history) to actual code (and code history)? YES. Identification of which tests will be potentially impacted by a given code change? YES. Resilient automated testing of user interfaces? YES. Automatic deployment management? YES. Integration-level testing as part of (designated) builds? YES. I could easily double or triple the above list, but these items should be enough to get you thinking about the "pain points" your team and organization currently face and the fact that there IS a way to relieve the pain. Over the course of the year, I am hoping to bring together some of the "best of breed" information, along with hosting (and participating in) discussions with various experts in the field. There are already a number of groups (including many on LinkedIn) that have an ALM focus, and I encourage everyone to check them out. I will be posting a list of the ones I find most helpful in the not too distant future. As I said at the beginning, 2011 is shaping up to be a very interesting (and productive) year. Why wait to start investigating and adopting ALM? ps: For those interested in becoming an "Alms Giver" in the charitable sense, I highly recommend checking out GiveCamp. A group of developers, designers and others get together to create a solution for a charity in just under 48 hours. I will be attending the GiveCamp in New York City on Jan 14-16; more information is available at nycgivecamp.org/

    Read the article

  • How is programming affected by spatial aptitude?

    - by natli
    The longer I work on a project, the less clear it becomes. It's like I cannot separate the various classes/objects anymore in my head. Everything starts mixing up, and it's extremely hard to take it all apart again. I start putting functions in classes where they really don't belong, and make silly mistakes such as writing code that I later find was 100% obsolete; things are no longer clearly mappable in my head. It isn't until I take a step back for several hours (or days sometimes!) that I can actually see what's going on again, and be productive. I usually try to fight through this; I am so passionate about coding that I wouldn't for the life of me know what else I could be doing. This is when stuff can get really weird: I get so up in my head that I sort of lose touch with reality (to some extent), in that various actions, such as pouring a glass of water, no longer happen on a conscious level. They happen on autopilot, during which pretty much all of my conscious concentration (is that even a thing?) is devoted to borderline pointless problem solving (trying to separate elements of code). It feels like a losing battle. So I took an IQ test a while ago (the Wechsler Adult Intelligence Scale, I believe) and it turned out my spatial aptitude was quite low. I still got a decent score, just above average, so I won't have to poke things with a stick for a living, but I am a little worried that this is such a handicap when writing/engineering computer programs that I won't ever be able to do it seriously or professionally. I am very much interested in what other people think of this... could a low spatial aptitude be the cause of the problems described above? Maybe I should be looking more along the lines of ADD or something similar, because I did get diagnosed with ADD at the age of 17 (5 years ago), but the medicine I received didn't seem to affect me that much so I never took it all that seriously. Sorry if I got a little off topic there; I know this is not a mental help board, so the question should be clear: How is programming affected by spatial aptitude? As far as I know people are born with low/medium/high spatial aptitude, so I think it's interesting to find out if the more fortunate are better programmers by birthright.

    Read the article

  • Newsletter sent with drupal goes to Spam Folder [closed]

    - by HerrSerker
    Possible Duplicate: How could I prevent my mail from being recognized as spam? I'm sending a newsletter with Drupal's Simplenews module. The website is hosted on a 1und1 server in Germany (as seen in the header domains online.de and kundenserver.de). When I send it, it goes to the Spam folder in Yahoo & GMail mailboxes, but not to the Spam folder in web.de, Hotmail and GMX mailboxes. Here is what I have in the mail header (for Yahoo in this example):
    Received: from 12.345.678.90 (EHLO sXXXXXXXXX.online.de) (12.345.678.90) by mtaXXX.mail.kks.yahoo.co.jp with SMTP; Fri, 15 Jun 2012 18:45:24 +0900
    Received: from [127.0.0.1] (helo=infongdXXXXX.rtr.kundenserver.de) by sXXXXXXXXX.online.de with esmtp (Exim 4.72) (envelope-from <[email protected]>) id 1SfT5k-00068r-Q8 for [email protected]; Fri, 15 Jun 2012 11:45:20 +0200
    Received: from 83.136.130.41 (IP may be forged by CGI script) by infongdXXXXX.rtr.kundenserver.de with HTTP id 0Z04SW-1SQTKp3LPr-00YxYk; Fri, 15 Jun 2012 11:45:20 +0200
    From: SENDER <[email protected]>
    To: "[email protected]" <[email protected]>
    Date: Fri, 15 Jun 2012 11:45:20 +0200
    Subject: This is the subject of the newsletter
    Thread-Topic: This is the subject of the newsletter
    Thread-Index: Ac1K3nT42juzo7uCSkq5dTlby1ZvpQ==
    List-Unsubscribe: <http://www.example.com/newsletter/confirm/remove/XXXXXXXXX>
    X-MS-Has-Attach:
    X-Auto-Response-Suppress: All
    X-MS-TNEF-Correlator:
    x-originating-ip: [12.345.678.90]
    authentication-results: mtaXXX.mail.kks.yahoo.co.jp from=example.com; domainkeys=neutral (no sig); dkim=neutral (no sig) [email protected]
    errors-to: "SENDER" <[email protected]>
    received-spf: none (sXXXXXXXXX.online.de: domain of [email protected] does not designate permitted sender hosts)
    x-apparently-to: [email protected] via 123.45.67.890; Fri, 15 Jun 2012 18:45:25 +0900
    x-sender-info: <[email protected]>
    content-length: 13762
    Content-Type: multipart/alternative; boundary="_000_7471797868716571796675707173696675806577726778666766687_"
    MIME-Version: 1.0
    I cannot see any direct spam filter message in this. But I'm kind of stunned by the "Received: from 83.136.130.41 (IP may be forged by CGI script)" part. After I searched a bit, it seems that this is a special 'feature' of 1und1 mail servers. Here are my questions: Is it possible that, if I get rid of the 'IP may be forged' part, the mail will no longer be regarded as spam? If so, does anyone know how I can get rid of it in Drupal?

    Read the article

  • TechEd 2012: Recap

    - by Tim Murphy
    TechEd this week was a great experience and I wanted to wrap it up with a summary post. First let me say a thank you to John and Jeff from GWB for supplying power, connectivity and a place to work in between sessions. The blogging hub was a great experience in itself. Getting to talk with other bloggers and other conference goers turned into a series of interesting conversations. And where else can you almost end up in the day 1 highlights video? The sessions at TechEd were a mixed bag of value. The keynotes rocked, both figuratively and literally, and most of the sessions that I went to were a good experience and had gems of information to take away. There were a few exceptions though. A couple of the sessions turned out to be sales jobs. Nothing turns me off more than that (there will be some really honest comments on those surveys). TechEd reinforced for me that much of the value is not in the sessions, but in the networking opportunities. I got to talk with several Microsoft team members and MVPs as well as some of the vendor representatives for companies like InRule and ComponentOne. I also got to expand both my local and extended community with discussions at meal times and while waiting for sessions to start. I think this is one of the benefits that a lot of people don't take advantage of at these conferences and that should be a bigger part of the advertising. Exposure to a wide variety of topics, many of which I had not been able to make time for up to this point, was invigorating. The list of topics includes: Office 365, Windows Server 2012, Windows 8, Metro, Azure. I can't wait to get back to work and dig into these subjects in more depth. The one complaint that I had, and heard from other attendees, was that there weren't enough sessions that were actually about development. I realize that TechEd started as an event for IT Pros, but there needs to be more value for the devs. It all went by too fast and it will take a couple more days to digest the material, but the batteries are recharged and I'm ready to leverage what I've learned. Hopefully we will do it again next year. del.icio.us Tags: TechEd, TechEd 2012

    Read the article

  • Compiling for T4

    - by Darryl Gove
    I've recently had quite a few queries about compiling for T4 based systems. So it's probably a good time to review what I consider to be the best practices. Always use the latest compiler. Being in the compiler team, this is bound to be something I'd recommend. But the serious points are that (a) every release the tools get better and better, so you are going to be much more effective using the latest release, (b) every release we improve the generated code, so you will see things get better, and (c) old releases cannot know about new hardware. Always use optimisation. You should use at least -O to get some amount of optimisation. -xO4 is typically even better as this will add within-file inlining. Always generate debug information, using -g. This allows the tools to attribute information to lines of source. This is particularly important when profiling an application. The default target of -xtarget=generic is often sufficient. This setting is designed to produce a binary that runs well across all supported platforms. If the binary is going to be deployed on only a subset of architectures, then it is possible to produce a binary that only uses the instructions supported on these architectures, which may lead to some performance gains. I've previously discussed which chips support which architectures, and I'd recommend that you take a look at the chart that goes with the discussion. Crossfile optimisation (-xipo) can be very useful, particularly when the hot source code is distributed across multiple source files. If you're allowed to have something as geeky as favourite compiler optimisations, then this is mine! Profile feedback (-xprofile=[collect: | use:]) will help the compiler make the best code layout decisions, and is particularly effective with crossfile optimisations. But what makes this optimisation really useful is that codes that are dominated by branch instructions don't typically improve much with "traditional" compiler optimisation, but often do respond well to being built with profile feedback. The macro flag -fast aims to provide a one-stop "give me a fast application" flag. This usually gives a best performing binary, but with a few caveats. It assumes the build platform is also the deployment platform, it enables floating point optimisations, and it makes some relatively weak assumptions about pointer aliasing. It's worth investigating. The SPARC64 processor, T3, and T4 implement floating point multiply-accumulate instructions. These can substantially improve floating point performance. To generate them the compiler needs the flag -fma=fused and also needs an architecture that supports the instruction (at least -xarch=sparcfmaf). The most critical advice is that anyone doing performance work should profile their application. I cannot overstate how important it is to look at where the time is going in order to determine what can be done to improve it. I also presented at Oracle OpenWorld on this topic, so it might be helpful to review those slides.

    Read the article

  • Shelving – What is it – and more importantly, can it help me?

    - by Chris Skardon
    Since we shifted to TFS we've had the ability to perform what is known as 'shelving'. Shelving (whilst not a wholly new topic in the world of SCC) is new to us, and didn't exist in our previous SCC solution – SVN. Soo… what is it? What? Shelving is a way to check in but not check in your code. By shelving you submit a copy of your 'pending changes' to the SCC server (which maintains a list of the shelvesets), and once that is done you can either continue working or undo your changes, safe in the knowledge that a backup copy exists on the server. You can unshelve your code at any time and get back to the state you were in when you shelved. Yer, that is great but why not just check it in?? Shelvesets don't have to build. The shelveset you put in there could be entirely broken, or it might solve every bug in the system – shelves aren't continuously integrated, so you can shelve anything. Hmmmm… What else? Shelving allows us to do some pretty cool stuff that beforehand was quite frankly a pain. For instance, gated check-ins are implemented via the shelving mechanism: when code is checked in, what you're actually doing is shelving it; the Build Controller will build the shelveset with the original code and if it succeeds, the code will be committed; if it fails – well – it's only you that has to fix the code :) Other nice features are things like the ability to share code you are working on… For example, if I was having trouble with a particular piece of code, I could shelve it, and then you (yes you) could get that shelveset and check out the problem for yourself, and if you fix it?? Well – you could check it in! Nice, but day-to-day shizzle? Let's say you've been working on your project and your project manager comes over to you and says: "Hey, errr, bad times, there is an urgent bug we need you to fix, it needs to go out now!" (also for this to play out – we'll need to assume you're currently working in the 'release' branch for another bug fix (maybe))… You could undo all your current changes (obviously you'll probably back up your code using zip or something, I imagine), fix the bug, then re-copy your backup over the top; or you could shelve and unshelve. Perhaps some other uses will awaken the shelver in you… :) Before each check-in – if you shelve, you no longer need to worry (if indeed you do) about resolving conflicts and mysteriously losing your code… Going home at night? Not checking in straight away? Why not shelve; this way – should the worst come to the worst and your local PC gives up – you can just get the shelveset onto another machine and be up and running in literally minutes…

    Read the article

  • 2D camera perspective projection from 3D coordinates -- HOW?

    - by Jack
    I am developing a camera for a 2D game with a top-down view that has depth. It's almost a 3D camera. Basically, every object has a Z even though it is in 2D, and similarly to parallax layers, their position, scale and rotation speed vary based on their Z. I guess this would be a perspective projection. But I am having trouble converting the objects' 3D coordinates into the 2D space of the screen so that everything has correct perspective and scale. I never learned matrices, though I did dig into the topic a bit today. I tried without using matrices, thanks to this article, but every attempt gave awkward results. I'm using ActionScript 3 and Flash 11+ (Starling), where the screen coordinates work like this: (left-handed coordinate system illustration). I can explain further what I did if you want to help me sort out what's wrong, or you can directly tell me how you would do it properly. In case you prefer the former, read on. These are images showing the formulas I used: upload.wikimedia.org/math/1/c/8/1c89722619b756d05adb4ea38ee6f62b.png upload.wikimedia.org/math/d/4/0/d4069770c68cb8f1aa4b5cfc57e81bc3.png (Sorry, new users can't post images, but both are from the Wikipedia article linked above, section "Perspective projection". That's where you'll find what all the variables mean, too.) The long formula is greatly simplified because I believe a normal top-down 2D camera has no X/Y/Z rotation values (correct?). Then it becomes d = a - c. Still, I can't get it to work. Maybe you could explain what numbers I should put in a(xyz), c(xyz), theta(xyz), and particularly e(xyz)? I don't quite get how e is different than c in my case. c.z is also an issue to me. If the Z of the camera's target object is 0, should the camera's Z be something like -600? (= focal length of 600) Whatever I do, it's wrong. I only got it to work when I used arbitrary calculations that "looked" right, like most cameras with parallax layers seem to do, but that's fake! ;) If I want objects to travel between Z layers I might as well do it right. :) Thanks a lot for your help!
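
    A minimal sketch of the usual perspective divide for a rotation-free, top-down camera, written here in C# purely for illustration (the question itself uses ActionScript 3); focalLength and every other name are assumptions, not part of the asker's project:

        struct Vec3 { public float X, Y, Z; }

        static class TopDownCamera
        {
            // Project a world point to the screen for a camera looking straight
            // down at (camX, camY), with the z = 0 layer mapping 1:1 to pixels.
            // Left-handed screen coordinates: y grows downward, z grows into the screen.
            public static void Project(Vec3 world, float camX, float camY,
                                       float focalLength,
                                       float screenCenterX, float screenCenterY,
                                       out float screenX, out float screenY, out float scale)
            {
                // Deeper objects (larger Z) shrink and drift toward the screen centre.
                scale = focalLength / (focalLength + world.Z);
                screenX = screenCenterX + (world.X - camX) * scale;
                screenY = screenCenterY + (world.Y - camY) * scale;
            }
        }

    Under this convention the camera effectively sits at Z = -focalLength, which is why a focal length of 600 corresponds to the camera Z of roughly -600 guessed at in the question; the same scale factor can also be applied to an object's display size.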

    Read the article

  • The Other Side of XBRL

    - by john.orourke(at)oracle.com
    With the United States SEC's mandate for XBRL filings entering its third year, and impacting over 7000 additional companies in 2011, there's a lot of buzz in the industry about how companies should address the new reporting requirements. Should they outsource the XBRL tagging process to a third-party publisher, handle the process in-house with a bolt-on XBRL tool, or integrate XBRL tagging with the financial close and reporting process? Oracle is recommending the latter approach; in fact, here's a link to a recent webcast that I did with CFO.com on this topic: http://www.cfo.com/webcasts/index.cfm/l_eventarchive/14548560 But production of XBRL-based filings is only half of the story. The other half is consumption of XBRL by regulators, academics, financial analysts and investors. As I mentioned in my December article on the XBRL US conference, the feedback from these groups is that they are not really leveraging XBRL for analysis of companies due to a lack of tools and historic XBRL-based data on public companies. The good news here is that the historic data problem is getting better as large, accelerated filers enter their third year of XBRL filings. And the situation is getting better on the reporting and analysis tools side of the equation as well – and Oracle is leading the way. In early January, Oracle released the Oracle XBRL Extension for Oracle Database 11g. This is a no-cost option on top of the latest Oracle Database 11.2.0.2.0 release. With this added functionality, organizations will be able to create one or more back-end XBRL repositories based on Oracle Database, which provide XBRL storage and queryability along with a set of XBRL-specific services. The XBRL Extension to Oracle XML DB integrates easily with Oracle Business Intelligence Suite Enterprise Edition (OBIEE) for analytics, and with interactive development environments (IDEs) and design tools for creating and editing XBRL taxonomies. The Oracle XBRL Extension to Oracle Database 11g should be attractive to regulators, stock exchanges, universities and other organizations that need to collect, analyze and disseminate XBRL-based filings. It should also be attractive to organizations that produce XBRL filings and need a way to store and compare their own XBRL-based financial filings to those of their peers and competitors. If you would like more information, here's a link to a web page on the Oracle Technology Network with the details about the Oracle XBRL Extension for Oracle Database 11g, including a data sheet, white paper, presentation, demos and other information: http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html

    Read the article

  • Retail in New York - a walk down 5th Avenue

    - by sarah.taylor(at)oracle.com
    It's the week of the NRF Big Show and all eyes in the retail industry are on New York. The Big Apple is famous for Big Retail, with a proliferation of incredibly iconic stores. The environment is exciting and familiar even to people visiting this small island for the first time. Most of us have travelled down Fifth Avenue through movies and TV even if we have never set foot on American soil. I find it one of the most exciting retail cities in the world, and I am thrilled this year to be here with so many of Oracle's international retail customers who are joining us for the Retail Exchange. The Oracle program brings retailers from all over the planet together to share ideas and be inspired by New York retail and the NRF event. The show celebrates its 100th year in 2011, and New York itself has been recognized globally as the capital of innovative retail for just as long. Fifth Avenue is where many global brands have placed their flagship stores, and businesses are in constant competition to set themselves apart from their competitors, both in the store and from the street. These flagship retail destinations showcase what today's customers find most exciting and delightful about retail. Tourists may only visit these stores once, but the impression that a trip to a flagship store leaves with a customer can last a lifetime. One of the stores currently turning heads on Fifth Avenue is Hollister, sister brand to Abercrombie and Fitch, which has filled its shop front with a massive live video (and audio) feed of surfers on the beach in California. To complete the effect, they also have troughs of water in front of the video screens to bring the sea to the street. And this isn't the only kind of surfing retailers are thinking about today: multi-channel retail is a hot topic for all of the retailers joining the Retail Exchange. The rest of the world looks to the brands along Fifth Avenue for inspiration: how they take advantage of new opportunities, how they set themselves apart from their competitors and how they keep their products fresh and desirable. With these inspiring pioneers in New York, it's little wonder that NRF's Big Show is so popular, and that New York is viewed as one of the retail capitals of the world. It is a pleasure to be here with so many of the world's greatest international retailers.

    Read the article

  • Is there an API for determining congressional districts?

    - by ardavis
    I'm looking to determine the congressional district based on an address my user is providing. This will save the user from having to look it up themselves. Does an API of this sort exist? Note: through my attempts to find one, I've only come across these: http://www.govtrack.us/developers/api (not sure how to submit an address or zip code, however) The following resources are available in the API ...Bills and resolutions in the U.S. Congress since 1973 (the 93rd Congress). ...A (bill, person) pair indicating cosponsorship, with join and withdrawn dates. ...Members of Congress and U.S. Presidents since the founding of the nation. ...Terms held in office by Members of Congress and U.S. Presidents. Each term corresponds with an election, meaning each term in the House covers two years (one 'Congress'), as President four years, and in the Senate six years (three 'Congresses'). ...Roll call votes in the U.S. Congress since 1789. How people voted is accessed through the Vote_voter API. ...How people voted on roll call votes in the U.S. Congress since 1789. See the Vote API. Filter on the vote field to get the results of a particular vote... http://www.opencongress.org/api (seems to be a way to find congress information, but not districts) This API provides programmers with structured access to all the data on OpenCongress, everything from official bill info to news and blog coverage to user-generated votes on bills and much more... This API defaults to returning XML. All queries can also return JSON... https://groups.google.com/forum/?fromgroups=#!topic/opendems-discuss/CeKyi_aANaE (similar question, no resolution) I've been looking over Open Dems, and seeing what's exposed at this point and what isn't. I work with Democrats Abroad, and am interested in using stuff from the lab for their sites. I quickly looked over the Precinct API, which does both more and less than what I'd need. An ideal resource would be any way of translating addresses into congressional districts at the very least (getting state district data would be good as well), since that would make it easier for DA's membership to make a difference in races like last month's NY26 race... Update: I'm looking at the source of the govtrack.us website and the 'doGeoCode' function may be useful: view-source:http://www.govtrack.us/congress/members If no one has any suggestions, I will try to go off what they are doing.
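
    Whatever service ends up providing the lookup, the client side is small. The sketch below shows the general shape in Python; the endpoint URL and the response fields are purely hypothetical placeholders, not a real API:

        # Hypothetical address -> congressional district lookup.
        # DISTRICT_API is a placeholder; swap in whatever service turns out to offer this
        # (govtrack, OpenCongress, a geocoder that returns district geographies, etc.).
        import json
        import urllib.parse
        import urllib.request

        DISTRICT_API = "https://example.org/district-lookup"   # placeholder URL

        def district_for_address(address):
            # Encode the free-form address as a query parameter.
            query = urllib.parse.urlencode({"address": address, "format": "json"})
            with urllib.request.urlopen(DISTRICT_API + "?" + query) as resp:
                data = json.load(resp)
            # Assumed response shape: {"state": "NY", "district": 26}
            return data["state"], data["district"]

        if __name__ == "__main__":
            state, district = district_for_address("1600 Pennsylvania Ave NW, Washington, DC 20500")
            print("%s-%s" % (state, district))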

    Read the article
