Search Results

Search found 1508 results on 61 pages for 'deep'.

Page 14/61 | < Previous Page | 10 11 12 13 14 15 16 17 18 19 20 21  | Next Page >

  • Oracle at The Forrester Customer Intelligence and Marketing Leadership Forums

    - by Christie Flanagan
    The Forrester Customer Intelligence Forum and the Forrester Marketing Leadership Forum will soon be here. This year's events will be co-located on April 18-19 at the J.W. Marriott at the L.A. Live entertainment complex in downtown Los Angeles.
    Last year's Marketing Forum was quite memorable for me. You see, while Forrester analysts and business marketers were busy mingling over at the Marriott, another marketing powerhouse was taking up residence a few feet away at The Staples Center. That's right, folks: Lady Gaga was coming to town. And, as I came to learn, it made perfect sense for Lady Gaga and her legions of fans to be sharing a small patch of downtown L.A. with marketing leaders from all over the world. After all, whether you like Lady Gaga or not, what pop star in recent memory has done more to build herself into a brand and to create an engaging, social and interactive customer experience for her Little Monsters?
    While Lady Gaga won't be back in town for this year's Forrester events, there are still plenty of compelling reasons to make the trip out to Los Angeles. The theme for The Forrester Customer Intelligence and Marketing Leadership Forums this year is "From Cool To Critical: Creating Engagement In The Age Of The Customer," and the events will tackle the important questions about how marketers can survive and thrive in the age of the empowered customer:
    • How can you assess consumer uptake of new innovations?
    • How do you build deep customer knowledge to drive competitive advantage?
    • How do you drive deep, personalized customer engagement?
    • What is more valuable — eyeballs or engagement?
    • How do business customers engage in new media types?
    • How can you tie social data to corporate data?
    • Who should lead the movement to customer obsession?
    • How should you shift your planning and measurement approaches to accommodate more data and a higher signal-to-noise ratio?
    • What role does technology play in customizing and synchronizing marketing efforts across channels?
    As a platinum sponsor of the event, Oracle offers a number of ways for you to interact while attending the Forums. Here are some of the highlights:
    Oracle Speaking Session
    Thursday, April 19, 9:15am – 9:55am
    Maximize Customer Engagement and Retention with Integrated Marketing & Loyalty
    Melissa Boxer, Vice President, Oracle CRM Marketing & Loyalty
    Customers expect to interact with your company, brand and products in more ways than ever before. New devices and channels, such as mobile, social and web, are creating radical shifts in the customer buying process and the ways your company can reach and communicate with existing and potential customers. While Marketing's objectives (attract, convert, retain) remain fundamentally the same, your approach and tools must adapt quickly to succeed in this more complex, cross-channel world. Hear how leading brands are using Oracle's integrated marketing and loyalty solutions to maximize customer engagement and retention through better planning, execution, and measurement of synchronized cross-channel marketing initiatives.
    Solution Showcase
    Wednesday, April 18: 10:20am – 11:50am, 12:30pm – 1:30pm, 2:55pm – 3:40pm
    Thursday, April 19: 9:55am – 10:40am, 12:00pm – 1:00pm
    Solution Showcase & Networking Reception
    Wednesday, April 18: 5:10pm – 6:20pm
    Be sure to follow the #webcenter hashtag for updates on these events.
And for a more considered perspective on what Lady Gaga can teach businesses about branding and customer experience, check out Denise Lee Yohn’s post, Lessons from Lady Gaga from the Brand as Business Bites blog.

    Read the article

  • Today at Oracle OpenWorld 2012

    - by Scott McNeil
    We have another full day of great Oracle OpenWorld keynotes, sessions, demos and customer presentations in the Scene and Be Heard Theater. Here's a quick rundown of what's happening today with Oracle Enterprise Manager 12c:
    Download the Oracle Enterprise Manager 12c OpenWorld schedule (PDF)
    Oracle Enterprise Manager Cloud Control 12c (and Private Cloud) General Sessions (Tues 2 Oct, 2012):
    11:45 AM - 12:45 PM | General Session: Using Oracle Enterprise Manager to Manage Your Own Private Cloud | Moscone South - 103*
    1:15 PM - 2:15 PM | General Session: Breakthrough Efficiency in Private Cloud Infrastructure | Moscone West - 3014
    Conference Sessions (Tues 2 Oct, 2012):
    10:15 AM - 11:15 AM | Oracle Exadata/Oracle Enterprise Manager 12c: Journey into Oracle Database Cloud | Moscone West - 3018
    10:15 AM - 11:15 AM | Bulletproof Your Application Upgrades with Secure Data Masking and Subsetting | Moscone West - 3020
    10:15 AM - 11:15 AM | Oracle Enterprise Manager 12c: Architecture Deep Dive, Tips, and Techniques | Moscone South - 303
    11:45 AM - 12:45 PM | RDBMS Forensics: Troubleshooting with Active Session History | Moscone West - 3018
    11:45 AM - 12:45 PM | Building and Operationalizing Your Data Center Environment with Oracle Exalogic | Moscone South - 309
    11:45 AM - 12:45 PM | Securely Building a National Electronic Health Record: Singapore Case Study | Westin San Francisco - Concordia
    1:15 PM - 2:15 PM | Managing Heterogeneous Environments with Oracle Enterprise Manager | Moscone West - 3018
    1:15 PM - 2:15 PM | Complete Oracle WebLogic Server Management with Oracle Enterprise Manager 12c | Moscone South - 309
    1:15 PM - 2:15 PM | Database Lifecycle Management with Oracle Enterprise Manager 12c | Moscone West - 3020
    1:15 PM - 2:15 PM | Best Practices, Key Features, Tips, Techniques for Oracle Enterprise Manager 12c Upgrade | Moscone South - 307
    1:15 PM - 2:15 PM | Enterprise Cloud with CSC's Foundation Services for Oracle and Oracle Enterprise Manager 12c | Moscone South - 236
    5:00 PM - 6:00 PM | Deep Dive 3-D on Oracle Exadata Management: From Discovery to Deployment to Diagnostics | Moscone West - 3018
    5:00 PM - 6:00 PM | Everything You Need to Know About Monitoring and Troubleshooting Oracle GoldenGate | Moscone West - 3005
    5:00 PM - 6:00 PM | Oracle Enterprise Manager 12c: The Nerve Center of Oracle Cloud | Moscone West - 3020
    5:00 PM - 6:00 PM | Advanced Management of Oracle E-Business Suite with Oracle Enterprise Manager | Moscone West - 2016
    5:00 PM - 6:00 PM | Oracle Enterprise Manager 12c Cloud Control Performance Pages: Falling in Love Again | Moscone West - 3014
    Hands-on Labs (Tues 2 Oct, 2012):
    10:15 AM - 12:45 PM | Managing the Cloud with Oracle Enterprise Manager 12c | Marriott Marquis - Salon 5/6
    1:15 PM - 2:15 PM | Database Performance Tuning Hands-on Lab | Marriott Marquis - Salon 5/6
    Scene and Be Heard Theater Sessions (Tues 2 Oct, 2012):
    10:30 AM - 10:50 AM | Start Small, Grow Big: Hands-On Oracle Private Cloud—A Step-by-Step Guide | Moscone South Exhibition Hall - Booth 2407
    12:30 PM - 12:50 PM | Blue Medora's Oracle Enterprise Manager Plug-in for VMware vSphere Monitoring | Moscone South Exhibition Hall - Booth 2407
    Demos:
    Application and Infrastructure Testing | Moscone West - W-092
    Automatic Application and SQL Tuning | Moscone South, Left - S-042
    Automatic Fault Diagnostics | Moscone South, Left - S-036
    Automatic Performance Diagnostics | Moscone South, Left - S-033
    Complete Care for Oracle Using My Oracle Support | Moscone South, Left - S-031
    Complete Cloud Lifecycle Management | Moscone North, Upper Lobby - N-019
    Complete Database Lifecycle Management | Moscone South, Left - S-030
    Comprehensive Infrastructure as a Service via Oracle Enterprise Manager | Moscone South, Left - S-045
    Data Masking and Data Subsetting | Moscone South, Left - S-034
    Database Testing with Oracle Real Application Testing | Moscone South, Left - S-041
    Identity Management Monitoring with Oracle Enterprise Manager | Moscone South, Right - S-212
    Mission-Critical, SPARC-Powered Infrastructure as a Service | Moscone South, Center - S-157
    Oracle E-Business Suite, Siebel, JD Edwards, and PeopleSoft Management | Moscone West - W-084
    Oracle Enterprise Manager Cloud Control 12c Overview | Moscone South, Left - S-039
    Oracle Enterprise Manager: Complete Data Center Management | Moscone South, Left - S-040
    Oracle Exadata Management | Moscone South, Center -
    Oracle Exalogic Management | Moscone South, Center -
    Oracle Fusion Applications Management | Moscone West - W-018
    Oracle Real User Experience Insight | Moscone South, Right - S-226
    Oracle WebLogic Server Management and Java Diagnostics | Moscone South, Right - S-206
    Platform as a Service Using Oracle Enterprise Manager | Moscone North, Upper Lobby - N-020
    SOA Management | Moscone South, Right - S-225
    Self-Service Application Testing on Private and Public Clouds | Moscone West - W-110
    Oracle OpenWorld Music Festival: New this year is Oracle's first annual Oracle OpenWorld Music Festival, featuring some of today's breakthrough musicians from around the country and the world. It's five nights of back-to-back performances in the heart of San Francisco—free to registered attendees. See the lineup.
    Not Heading to OpenWorld—Watch it Live!
    Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter
    Download the Oracle Enterprise Manager Cloud Control 12c Mobile app

    Read the article

  • 10 Best Programming Podcast 2010 Edition

    - by mbcrump
    This list is in no particular order; these are just the 10 best programming podcasts that I have found so far.
    Stack Overflow Podcast - Jeff Atwood (of codinghorror.com) and Joel Spolsky (of joelonsoftware.com) discuss the development of their new programming community, StackOverflow.com. [This podcast hasn't been updated in a while, but it's always great to hear more from Jeff Atwood]
    Hanselminutes - Hanselminutes is a weekly audio talk show with noted web developer and technologist Scott Hanselman, hosted by Carl Franklin. Scott discusses utilities and tools, gives practical how-to advice, and discusses ASP.NET or Windows issues and workarounds. [This podcast has recently started talking about random topics like diabetes, plane travel and geek relationship tips. I am not sure if Scott is trying to move to a more mainstream audience or not]
    Herding Code - A weekly discussion featuring K. Scott Allen (odetocode.com), Kevin Dente, Scott Koon (lazycoder.com), and Jon Galloway. [Great all-around podcast that I would recommend to all]
    Deep Fried Bytes - Deep Fried Bytes is an audio talk show with a Southern flavor hosted by technologists and developers Keith Elder and Chris Woodruff. The show discusses a wide range of topics including application development, operating systems and technology in general. Anything is fair game if it plugs into the wall or takes a battery. [This is one that just keeps getting better]
    Dot Net Rocks - .NET Rocks! is an Internet audio talk show for Microsoft .NET developers. [One of the first, and usually very high quality content]
    Connected Show - Connected Show Podcast! A podcast covering new Microsoft technology for the developer community. The show is hosted by Dmitry Lyalin and Peter Laudati. [This and Polymorphic are two of my favorite podcasts – Dmitry is a great host and I would recommend this to all]
    Polymorphic Podcast - Object-oriented development, architecture and best practices in .NET. [Craig is an ASP.NET MVP and a great presenter. His podcast is great and it could only be better if he recorded it more often]
    ASP.NET Podcast - Wallace B. (Wally) McClure presents interviews and short technical talks on .NET technologies. [Has great information on ASP.NET of course, as well as iPhone dev]
    Ruby on Rails Podcast - News and interviews about the Ruby language and the Rails website framework. [Even though I am not a Ruby programmer, I've found this podcast very interesting]
    Software Engineering Radio - Software Engineering Radio is a podcast targeted at the professional software developer. The goal is to be a lasting educational resource, not a newscast. Every ten days, a new episode is published that covers all topics in software engineering. Episodes are either tutorials on a specific topic, or an interview with a well-known character from the software engineering world. All SE Radio episodes are original content; we do not record conferences or talks given in other venues. Each episode comprises two speakers to ensure a lively listening experience. SE Radio is an independent and non-commercial organization. [Another excellent podcast – I would recommend any programmer add this to his/her drive home]
    If I have missed something, please feel free to email me and it might make the 2011 list. =)

    Read the article

  • What Counts For A DBA: ESP

    - by Louis Davidson
    Now I don't want to get religious here, and I'm not going to, but what I'm going to describe in this 'What Counts for a DBA' installment sometimes feels like magic. Often I will spend hours thinking about the solution to a design issue or coding problem, working diligently to try to come up with a solution, and then finally just give up with the feeling that I'm not even qualified to be a data entry clerk, much less a data architect. At this point I often take a walk (or sometimes a nap), and then it hits me. I realize that I have the answer just sitting in my brain, ready to implement. This phenomenon is not limited to walks either; it can happen almost any time after I stop my obsession about a problem. I call this phenomenon ESP (or Extra-Sensory Programming). Another term for this could be 'sleeping on it', and while the idiom tends to mean to let time pass to actively think about a problem, sleeping on a problem also lets you relax and let your brain do the work.
    I first noticed this back in my college days when I would play video games for hours on end. We would get stuck deep in some dungeon, unable to find a way out, playing for days on end until we were beaten down tired. Once we gave up and walked away, the solution would usually be there waiting for one of us when we came back to play the next day. Sometimes it would be in the form of a dream, and sometimes it would just be that the problem was now easy to solve when we started to play again. While it worked great for video games, it never occurred when I studied English Literature for hours on end, or even when I worked for the same sort of frustrating hours attempting to solve a homework problem in Calculus. I believe that the difference was that I was passionate about the video game, and certainly far less so about homework where people used the word "thou" instead of "you" or x to represent a number.
    This phenomenon occurs somewhat more often in my current work as a professional data programmer, because I am very passionate about SQL and love those aspects of my career choice. Every day that I get to draw a new data model to solve a customer issue, or write a complex SELECT statement to ferret out the answer to a complex data question, is a great day. I hope it is the same for any reader of this blog. But, unfortunately, while the day on the whole is great, a heck of a lot of noise is generated in work life. There are the typical project deadlines, along with the requisite project manager sitting on your shoulders shouting slogans to try to make you go faster. Add in office politics, and the occasional family issues that permeate the mind, and you lose the ability to think deeply about any problem, not to mention occasionally forgetting your own name. These office realities, coupled with a difficult SQL problem staring at you from your widescreen monitor, will slowly suck the life force out of your body, making it seem impossible to solve the problem.
    This is when the walk starts, or a nap. Maybe you hide from the madness under your desk like George Costanza hides from Steinbrenner on Seinfeld. Forget about the problem. Free your mind from the insanity of the problem and your surroundings. Then let your training and education deep in your brain take over and see if it will passively do the rest for you.
    If you don't end up with a solution, the worst-case scenario is that you have had a bit of exercise or rest, and you won't have heard the phrase "better is the enemy of good enough" even once…which certainly will do your brain some good. Once you stop whipping your brain for information, inspiration may just strike, and instead of a humdrum solution you find a solution you hadn't even considered, almost magically. So, my beloved manager, next time you have an urgent deadline and you come across me taking a nap, creep away quietly, because I'm working, doing some extra-sensory programming.

    Read the article

  • Make your TSQL easier to read during a presentation

    - by Jonathan Allen
    SQL Server Management Studio 2012 has some neat settings that you can use to make your presentations at a SQL event better for the attendees, if you are willing to spend a few minutes making some settings changes. Historically, I have been reluctant to make changes to my SSMS settings, as it is such a tedious process and it's not 100% clear that what you think you are changing is actually what gets changed. With SSMS 2012 this has become a lot easier and a lot less risky.
    In any session that involves TSQL there is a trade-off between the speaker having all the code on screen and the attendees being able to read any of what is on screen. You (the speaker) might be able to read this when you are working on the code, but plenty of your audience won't be able to make head or tail of it. SSMS 2012 has a zoom facility that can help: but don't go nuts… having the font too big means you will be scrolling a lot and the code will again be rendered unreadable.
    There is more, though, but you need to take a deep breath, open the Tools menu and delve into the SSMS options. In previous versions of SSMS this is a deep, dark and scary place where changing values can be obscure and sometimes catastrophic to the UI when you get back to the code editor. First things first: we set out as a good DBA and save our current (and presumably acceptable) SSMS configuration. From the Import and Export Settings option you can set up a file to hold all of the settings that you currently have. The wizard will open and ask you to pick an option; this time around choose to export settings. Hit Next and Next again, then name your settings profile in the final step of the wizard and click Finish. Once this is done you can change whatever you like and always get back to this configuration in a couple of clicks.
    So what can you change to make for a good experience? Well, there are plenty of things that can be altered, but don't go too mad and change too many things without taking a look at the results. For every item in the list above you can change font, size, weight, colour, background colour and so on, but consider what you are trying to achieve and take it slowly. I have seen presenters with their settings set to have a yellow highlight and black font rather than the default pale blue background and slightly darker font; to achieve that, select Text Editor and then select "Selected Text" in the Display Items listbox. As you change things, the Sample area gives you an idea of what effect you are going to have. Black and yellow is the colour combination with the highest contrast – that's why bees and wasps* are that colour.
    What next? How about increasing the default font for your demo scripts? This means that any script you open, and any new ones that you start, will take on this font. No more zooming (or forgetting to) in the middle of sessions. Now don't forget to save this profile – follow the same steps as above but give the profile a different name; something like PresentationBigFontHighContrast might be appropriate. Once you are done making changes, export the settings once more, and then go into the Import Export wizard and import settings from the first profile you created. Everything will be back to normal. Now making changes to suit your environment can be done very easily and with confidence.
    * – and warning tape and safety signs and so forth – Health and Safety officers simply copy nature!

    Read the article

  • Consumer Oriented Search In Oracle Endeca Information Discovery – Part 1

    - by Bob Zurek
    Information Discovery, a core capability of Oracle Endeca Information Discovery, enables business users to rapidly search, discover and navigate through a wide variety of big data including structured, unstructured and semi-structured data. One of the key capabilities, among many, that differentiates our solution from others in the Information Discovery market is our deep support for search across this growing amount of varied big data. Our method and approach is very different from the classic simple keyword search that is found in many information discovery solutions. In this first part of a series on the topic of search, I will walk you through many of the key capabilities that go beyond the simple search box that you might experience in products where search was clearly an afterthought or an attempt to catch up to our core capabilities in this area. Let's explore.
    The core data management solution of Oracle Endeca Information Discovery is the Endeca Server, a hybrid search-analytical database that is highly scalable and column-oriented in nature. We will talk in more technical detail about the capabilities of the Endeca Server in future blog posts, as this post is intended to give you a feel for the deep search capabilities that are an integral part of the Endeca Server. The Endeca Server provides best-of-breed search features as well as a new class of features that are the first to be designed around the requirement to bridge structured, semi-structured and unstructured big data. Some of the key search features include type-aheads, automatic alphanumeric spell corrections, positional search, Booleans, wildcarding, natural language, and category search and query classification dialogs. This is just a subset of the advanced search capabilities found in Oracle Endeca Information Discovery.
    Search is an important feature that makes it possible for business users to explore the diverse data sets the Endeca Server can hold at any one time. The search capabilities in the Endeca Server differ from other Information Discovery products with simple "search boxes" in the following ways:
    The Endeca Server Supports Exploratory Search. Enterprise data frequently requires the user to explore content through an ad hoc dialog, with guidance that helps them succeed. This has implications for how to design search features. Traditional search doesn't assume a dialog, and so it uses relevance ranking to get its best guess to the top of the results list. It calculates many relevance factors for each query, like word frequency, distance, and meaning, and then reduces those many factors to a single score based on a proprietary "black box" formula. But how can a business user, searching, act on the information that a document is, say, only 38.1% relevant? In contrast, exploratory search gives users the opportunity to clarify what is relevant to them through refinements and summaries. This approach has received consumer endorsement through popular e-commerce sites, where guided navigation across a broad range of products has helped consumers better discover choices that meet their sometimes undetermined requirements. This same model exists in Oracle Endeca Information Discovery. In fact, the Endeca Server powers many of the most popular e-commerce sites in the world.
    The Endeca Server Supports Cascading Relevance. Traditional approaches to search reduce many relevance weights to a single score. This means that if a result with a good title match gets a similar score to one with an exact phrase match, they'll appear next to each other in a list. But a user can't deduce from their score why each got its ranking, even though that information could be valuable. Oracle Endeca Information Discovery takes a different approach. The Endeca Server stratifies results by a primary relevance strategy, and then breaks ties within a stratum by ordering them with a secondary strategy, and so on. Application managers get the explicit means to compose these strategies based on their knowledge of their own domain. This approach gives both business users and managers a deterministic way to set and understand relevance.
    Now that you have an understanding of two of the core search capabilities in Oracle Endeca Information Discovery, our next blog post on this topic will discuss more advanced features, including set search and second-order relevance, as well as an understanding of faceted search mechanisms that include queries and filters.

    Read the article

  • What Counts For a DBA – Depth

    - by Louis Davidson
    SQL Server offers very simple interfaces to many of its features. Most people could open up SSMS, connect to a server, write a simple query and see the results. Even several of the core DBA tasks are deceptively straightforward. It doesn't take a rocket scientist to perform a basic database backup or run a trace (even using the newfangled Extended Events!). However, appearances can be deceptive, and oftentimes it is really important that a DBA understands not just the basics of how to perform a task, but why we do a task and how that task works.
    As an analogy, consider a child walking into a darkened room. Most would know that they need to turn on the light, and how to do it, so they flick the switch. But what happens if the light fails to shine forth? Most would immediately tell you that you need to consider changing the light bulb. So you hop in the car, take them to the local home store and instruct them to buy a replacement. Confronted with a 40 foot display of light bulbs, how will they decide which of the hundreds of bulbs, of different types, fittings, shapes, colors, power and efficiency ratings, is the right choice? Obviously the main lesson the child is going to learn this day is how to use their cell phone as a flashlight so they don't have to ask for help the next time.
    Likewise, when the metaphorical toddlers who use your database server have issues, they will instinctively know something is wrong, and may even have some idea what caused it, but will have no depth of knowledge to figure out the right solution. That is where the DBA comes in and attempts to save the day. However, when one looks beneath the shiny UI, SQL Server has its own "40 foot display of light bulbs", in the form of the tremendous number of tools and the often-bewildering amount of information they can present to the DBA to help us find issues. Unfortunately, without that depth we end up resorting to guesswork, trying different "bulbs" over and over, hoping to stumble on the answer. This is where the right depth of knowledge goes a long way.
    If we need to write a SELECT statement, then knowing the syntax and where to find the data is not enough. Knowledge of indexes and query plans is essential. Without it, we might hit on a query that "works", but we are basically still a user, not a programmer, because we have no real control over our platform. Is that level of knowledge deep enough? Probably not, since knowledge of the underlying metadata and structures would be very useful in helping us make sense of any query plan. Understanding the structure of an index makes the "key lookup" operator not sound like what you do when someone tapes your car key to the ceiling. So is even this level of understanding deep enough? Do we need to understand the memory architecture used to process the query? It might be a comforting level of knowledge, and will doubtless come in handy at some point, but it is not strictly necessary in most cases. Beyond that lies (more or less) full knowledge of the SQL language and the intricacies of every step the SQL Server engine takes to process our query.
    My personal theory is that, as a professional, our knowledge of a given task should extend, at a minimum, one level deeper than is strictly necessary to perform the task. Anything deeper can be left to the ridiculously smart, or obsessive, or both. As an example: tasked with storing an integer value between 0 and 99999999, it's essential that I know that choosing an Integer over Decimal(8,0) will likely offer performance benefits. It is then useful that I also understand the value of adding a CHECK constraint, to make sure the values are valid for the desired range; and comforting that I know a little about the underlying processors, registers and computer math. Anything further, I leave to the likes of Joe Chang, whose recent blog post on the topic offers depth by the bucketful!

    Read the article

  • Resolving collisions between dynamic game objects

    - by TheBroodian
    I've been building a 2D platformer for some time now, and I'm getting to the point where I am adding dynamic objects to the stage for testing. This has prompted me to consider how I would like my character and other objects to behave when they collide. A typical staple in many 2D platformer type games is that the player takes damage upon touching an enemy, and then essentially becomes able to pass through enemies during a period of invulnerability, and at the same time, enemies are able to pass through each other freely. I personally don't want to take this approach; it feels strange to me that the player should receive arbitrary damage for harmless contact with an enemy, despite whether the enemy is attacking or not, and I would like my enemies' interactions between each other (and my player) to be a little more organic, so to speak.
    In my head I sort of have this idea where a game object (player, or non-player) would be able to push other game objects around by 'pushing' each other out of one another's bounding boxes if there is an intersection, and maybe correlate the repelling force to how much their bounding boxes are intersecting. The problem I'm experiencing is that I have no idea what the math might look like for something like this. I'll show what work I've done so far; it sort of works, but it's jittery, and generally not quite what I would pass in a functional game:

    //Clears the anti-duplicate buffer
    collisionRecord.Clear();
    //pick a thing
    foreach (GameObject entity in entities)
    {
        //pick another thing
        foreach (GameObject subject in entities)
        {
            //check to make sure both things aren't the same thing
            if (!ReferenceEquals(entity, subject))
            {
                //check to see if thing2 is in semi-near proximity to thing1
                if (entity.WideProximityArea.Intersects(subject.CollisionRectangle) || entity.WideProximityArea.Contains(subject.CollisionRectangle))
                {
                    //check to see if thing2 and thing1 are colliding.
                    if (entity.CollisionRectangle.Intersects(subject.CollisionRectangle) || entity.CollisionRectangle.Contains(subject.CollisionRectangle) || subject.CollisionRectangle.Contains(entity.CollisionRectangle))
                    {
                        //check if we've already resolved their collision or not.
                        if (!collisionRecord.ContainsKey(entity.GetHashCode()))
                        {
                            //more duplicate resolution checking.
                            if (!collisionRecord.ContainsKey(subject.GetHashCode()))
                            {
                                //if thing1 is traveling right...
                                if (entity.Velocity.X > 0)
                                {
                                    //if it isn't too far to the right...
                                    if (subject.CollisionRectangle.Contains(new Microsoft.Xna.Framework.Rectangle(entity.CollisionRectangle.Right, entity.CollisionRectangle.Y, 1, entity.CollisionRectangle.Height)) || subject.CollisionRectangle.Intersects(new Microsoft.Xna.Framework.Rectangle(entity.CollisionRectangle.Right, entity.CollisionRectangle.Y, 1, entity.CollisionRectangle.Height)))
                                    {
                                        //Find how deep thing1 is intersecting thing2's collision box
                                        float offset = entity.CollisionRectangle.Right - subject.CollisionRectangle.Left;
                                        //Move both things in opposite directions half the length of the intersection, pushing thing1 to the left, and thing2 to the right.
                                        entity.Velocities.Add(new Vector2(-(((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                        subject.Velocities.Add(new Vector2((((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                    }
                                }
                                //if thing1 is traveling left...
                                if (entity.Velocity.X < 0)
                                {
                                    //if thing1 isn't too far left...
                                    if (entity.CollisionRectangle.Contains(new Microsoft.Xna.Framework.Rectangle(subject.CollisionRectangle.Right, subject.CollisionRectangle.Y, 1, subject.CollisionRectangle.Height)) || entity.CollisionRectangle.Intersects(new Microsoft.Xna.Framework.Rectangle(subject.CollisionRectangle.Right, subject.CollisionRectangle.Y, 1, subject.CollisionRectangle.Height)))
                                    {
                                        //Find how deep thing1 is intersecting thing2's collision box
                                        float offset = subject.CollisionRectangle.Right - entity.CollisionRectangle.Left;
                                        //Move both things in opposite directions half the length of the intersection, pushing thing1 to the right, and thing2 to the left.
                                        entity.Velocities.Add(new Vector2((((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                        subject.Velocities.Add(new Vector2(-(((offset * 4) * (float)gameTime.ElapsedGameTime.TotalMilliseconds)), 0));
                                    }
                                }
                                //Make a record that thing1 and thing2 have interacted and the collision has been solved, so that if thing2 is picked next in the foreach loop, it isn't checked against thing1 a second time before the next update.
                                collisionRecord.Add(entity.GetHashCode(), subject.GetHashCode());
                            }
                        }
                    }
                }
            }
        }
    }

    One of the biggest issues with my code, aside from the jitteriness, is that if one character were to land on top of another character, it very suddenly and abruptly resolves the collision, whereas I would like a more subtle and gradual resolution. Any thoughts or ideas are incredibly welcome and helpful.
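    A sketch of one possible direction (not the poster's code, and the scaling factor is an assumption): compute the overlap between the two bounding boxes each frame and push the pair apart along the axis of least penetration, scaling the push by how deep the overlap is. The GameObject members used here (CollisionRectangle, Velocities) come from the snippet above.

    using System;
    using Microsoft.Xna.Framework;

    // Sketch: separate two overlapping AABBs along the axis of least penetration.
    private static void ResolveOverlap(GameObject a, GameObject b, float strength)
    {
        Rectangle ra = a.CollisionRectangle;
        Rectangle rb = b.CollisionRectangle;

        // Overlap depth on each axis; non-positive means the boxes aren't touching.
        float overlapX = Math.Min(ra.Right, rb.Right) - Math.Max(ra.Left, rb.Left);
        float overlapY = Math.Min(ra.Bottom, rb.Bottom) - Math.Max(ra.Top, rb.Top);
        if (overlapX <= 0 || overlapY <= 0)
            return;

        if (overlapX < overlapY)
        {
            // Push horizontally; direction depends on which side a sits relative to b.
            float dir = (ra.Left + ra.Width / 2f < rb.Left + rb.Width / 2f) ? -1f : 1f;
            a.Velocities.Add(new Vector2(dir * overlapX * strength, 0f));
            b.Velocities.Add(new Vector2(-dir * overlapX * strength, 0f));
        }
        else
        {
            // Push vertically, so landing on top of someone eases apart instead of snapping.
            float dir = (ra.Top + ra.Height / 2f < rb.Top + rb.Height / 2f) ? -1f : 1f;
            a.Velocities.Add(new Vector2(0f, dir * overlapY * strength));
            b.Velocities.Add(new Vector2(0f, -dir * overlapY * strength));
        }
    }

    Keeping strength well below 1 spreads the correction over several frames, which is what gives the more gradual resolution asked about above; the push is also symmetric and independent of current velocity, so objects that are merely resting on each other ease apart instead of snapping.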

    Read the article

  • Box2dx: Usage of World.QueryAABB?

    - by Rosarch
    I'm using Box2dx with C#/XNA. I'm trying to write a function that determines if a body could exist in a given point without colliding with anything:

    /// <summary>
    /// Can gameObject exist with start Point without colliding with anything?
    /// </summary>
    internal bool IsAvailableArea(GameObjectModel model, Vector2 point)
    {
        Vector2 originalPosition = model.Body.Position;
        model.Body.Position = point; // less risky would be to use a deep clone

        AABB collisionBox;
        model.Body.GetFixtureList().GetAABB(out collisionBox);

        // how is this supposed to work?
        physicsWorld.QueryAABB(x => true, ref collisionBox);

        model.Body.Position = originalPosition;
        return true;
    }

    Is there a better way to go about doing this? How is World.QueryAABB supposed to work? Here is an earlier attempt. It is broken; it always returns false.

    /// <summary>
    /// Can gameObject exist with start Point without colliding with anything?
    /// </summary>
    internal bool IsAvailableArea(GameObjectModel model, Vector2 point)
    {
        Vector2 originalPosition = model.Body.Position;
        model.Body.Position = point; // less risky would be to use a deep clone

        AABB collisionBox;
        model.Body.GetFixtureList().GetAABB(out collisionBox);

        ICollection<GameObjectController> gameObjects = worldQueryEngine.GameObjectsForPredicate(x => !x.Model.Passable);

        foreach (GameObjectController controller in gameObjects)
        {
            AABB potentialCollidingBox;
            controller.Model.Body.GetFixtureList().GetAABB(out potentialCollidingBox);

            if (AABB.TestOverlap(ref collisionBox, ref potentialCollidingBox))
            {
                model.Body.Position = originalPosition;
                return false; // there is something that will collide at this point
            }
        }

        model.Body.Position = originalPosition;
        return true;
    }
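    For what it's worth, here is a sketch of how the query-based version might look, based on the callback shape already used in the snippet above (QueryAABB invoking a delegate per fixture found, and continuing while the delegate returns true). The exact delegate signature and member names (for example GetBody) vary between Box2D ports, so treat them as assumptions rather than the definitive Box2dx API.

    // Sketch (assumptions noted above): let the broad-phase query flag any fixture
    // overlapping the candidate AABB, instead of always returning true.
    internal bool IsAvailableArea(GameObjectModel model, Vector2 point)
    {
        Vector2 originalPosition = model.Body.Position;
        model.Body.Position = point;

        AABB collisionBox;
        model.Body.GetFixtureList().GetAABB(out collisionBox);

        bool blocked = false;
        physicsWorld.QueryAABB(fixture =>
        {
            // Ignore the body being tested, otherwise it always "collides" with itself.
            if (fixture.GetBody() != model.Body)
            {
                blocked = true;
                return false; // stop the query early; the area is already known to be taken
            }
            return true; // keep searching
        }, ref collisionBox);

        model.Body.Position = originalPosition;
        return !blocked;
    }

    The appeal of this shape is that the engine's broad phase does the filtering, so there is no need to enumerate every non-passable object by hand as in the earlier attempt; whether temporarily moving the body mid-step is safe is a separate question, and cloning the body (as the comment in the original code suggests) would avoid it.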

    Read the article

  • Is this a tableView issue or a CoreData Issue

    - by monotreme
    I have a Core Data-driven navigation app and I'm trying to figure out why it's crashing. I've got a hierarchy which is 3 view controllers deep, all related by Core Data relationships, like this: TableViewA =relationship= TableViewB =relationship= TableViewC. I'm honestly a novice at Core Data and I think my problem lies in the fetched results controller. I have one in TableViewA and another in TableViewB, and no matter how deep I go, the console always cites TableViewB's fetched results controller methods after a crash. Is this the problem?
    What's happening specifically is that if I launch my app and drill down into the hierarchy of one record, let's call it Record1, I can delete sub-records to my heart's content. Gone! No problem! But the second I go back to TableViewA, drill down into a different record, let's call that one Record2, and try to delete its sub-records, my app crashes, with the console citing this code from TableViewB as the problem:

    - (void)controllerWillChangeContent:(NSFetchedResultsController *)controller
    {
        // The fetch controller is about to start sending change notifications, so prepare the table view for updates.
        [self.tableView beginUpdates];
    }

    When I go into the debugger, the specific method it always has a problem with is:

    if (![x.managedObjectContext save:&error]) {
        NSLog(@"Unresolved error %@, %@", error, [error userInfo]);
        abort();
    }

    Just a confirmation of my idiocy with Core Data is all I'm looking for, I think. Oh, and how many managed object contexts should I have in an app of this type? I've been told I should have separate ones for adding content, which should then re-integrate into the main one. Is this true? Thanks!

    Read the article

  • Theory of formal languages - Automaton

    - by dader51
    Hi everybody! I'm wondering about formal languages. I have a kind of parser: it reads an XML-like serialized tree structure and turns it into a multidimensional array. I figured out that I need at least three variables to achieve the job:

    $tree  = array();          // a new array
    $pTree = array(&$tree);    // a new array whose first element points to $tree
    $deep  = 0;

    plus the one containing the sentence split into words. My point is on the similarities between the algorithm being used and the different kinds of automata (state machines, Turing machines, stack machines, ...). The $words variable is the "tape" of the automaton, the tests/conditions of the algorithm are transitions, $deep is the state and $tree is the output. I cannot figure out what $pTree is. So the question is: which automaton am I implicitly using here, and to which family of formal languages does it correspond? And what about recursion?

    Read the article

  • How can I use Hibernate Criteria's to query nested tables?

    - by cbmeeks
    I've looked all over SO and Google, but I guess I'm not using the right search terms or something. Anyway, say I have three tables:

    Companies
    -----------------------------------------
    id
    name
    user_id

    Users
    -----------------------------------------
    id
    username
    usertype_id

    UserTypes
    -----------------------------------------
    id
    typeofuser

    So ACME would be a company; it would have a user Moe, and Moe would be a usertype of Stooge. In SQL, I would do something like:

    select *
    from companies c
    join users u on (u.id = c.user_id)
    join usertypes ut on (ut.id = u.usertype_id)
    where ut.typeofuser = 'Stooge'

    But I can't seem to figure out how to do that in a Criteria. I have tried:

    Criteria crit = io.getSession().createCriteria(Company.class);
    List<Company> list = crit.createCriteria("users")
        .createCriteria("usertypes")
        .add(Restrictions.eq("typeofuser", "Stooge"))
        .list();

    But I get back way too many records, and the results don't even come close to being accurate. I've also tried:

    Criteria crit = io.getSession().createCriteria(Company.class);
    List<Company> list = crit.createAlias("users", "u")
        .createAlias("u.usertypes", "ut")
        .add(Restrictions.eq("ut.typeofuser", "Stooge"))
        .list();

    That seems to bring back the exact same result set. I actually have read the user manual. When I nest only one level deep (i.e. searching by users) it is fine, but when I go two layers deep, I can't quite get it. And the manual is no help. I just can't relate cats and kittens to business objects. Maybe they should use cats, kittens and fleas? :-/ Thanks for any suggestions.

    Read the article

  • Connecting grouped dots/points on a scatter plot based on distance

    - by ToNoY
    I have 2 sets of depth point measurements, for example:

    > a
      depth value
    1     2     2
    2     4     3
    3     6     4
    4     8     5
    5    16    40
    6    18    45
    7    20    58

    > b
      depth value
    1    10    10
    2    12    20
    3    14    35

    I want to show both groups in one figure, plotted against depth and with different symbols, as you can see here:

    plot(a$value, a$depth, type='b', col='green', pch=15)
    points(b$value, b$depth, type='b', col='red', pch=14)

    The plot seems okay, but the annoying part is that the green symbols are all connected (though I want connected lines also). I want a connection only when one group has continuous data points at a 2 m interval, i.e. the symbols should be connected with a line from 2 to 8 m (green), then group B symbols should be connected from 10 to 14 m (red), and again group A symbols should be connected (green), which means I do NOT want to see a connection between the 8 m sample and the 16 m sample for group A.
    An easy solution may be dividing group A into two parts (say, A-shallow and A-deep) and then plotting A-shallow, B, and A-deep separately. But this is completely impractical because I have thousands of data points with hundreds of groups, i.e. I have to produce many depth profiles. Therefore, there has to be a way to program this so that dots are NOT connected beyond a prescribed frequency/depth interval (e.g. 2 m in this case) for a particular group of samples. Any idea?

    Read the article

  • How to retrieve data from a dialog box?

    - by Ralph
    Just trying to figure out an easy way to either pass or share some data between the main window and a dialog box. I've got a collection of variables in my main window that I want to pass to a dialog box so that they can be edited. The way I've done it now is to pass the list to the constructor of the dialog box:

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        var window = new VariablesWindow(_templateVariables);
        window.Owner = this;
        window.ShowDialog();

        if (window.DialogResult == true)
            _templateVariables = new List<Variable>(window.Variables);
    }

    And then in there, I guess I need to deep-copy the list:

    public partial class VariablesWindow : Window
    {
        public ObservableCollection<Variable> Variables { get; set; }

        public VariablesWindow(IEnumerable<Variable> vars)
        {
            Variables = new ObservableCollection<Variable>(vars);
            // ...

    That way, when they're edited, the changes don't get reflected back in the main window until the user actually hits "Save". Is that the correct approach? If so, is there an easy way to deep-copy an ObservableCollection? Because as it stands now, I think my Variables are being modified because it's only doing a shallow copy.
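    For the deep-copy part, one common sketch (the Variable type and its properties below are illustrative assumptions, not the poster's actual class) is to give Variable a Clone method and project the incoming sequence through it, so the dialog edits copies rather than the caller's instances:

    // Sketch: deep-copy by cloning each element.
    public class Variable
    {
        public string Name { get; set; }
        public string Value { get; set; }

        public Variable Clone()
        {
            return new Variable { Name = this.Name, Value = this.Value };
        }
    }

    // In the dialog's constructor (requires using System.Linq):
    // Variables = new ObservableCollection<Variable>(vars.Select(v => v.Clone()));

    The constructor call new ObservableCollection<Variable>(vars) only copies the references, which is why edits show up in the main window immediately; cloning each element (or serializing and deserializing the list) is what makes Cancel cheap to implement.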

    Read the article

  • Persisting object changes from child form to parent form based on button press.

    - by Shyran
    I have created a form that is used for both adding and editing a custom object. Which mode the form takes is provided by an enum value passed from the calling code. I also pass in an object of the custom type. All of my controls are data bound to the specific properties of the custom object. When the form is in Add mode, this works great, as when the controls are updated with data, the underlying object is as well. However, in Edit mode, I keep two references to the custom object supplied by the calling code: the original, and a temporary one made through deep copying. The controls are then bound to the temporary copy; this makes it easy to discard the changes if the user clicks the Cancel button.
    What I want to know is how to persist those changes back to the original object if the user clicks the OK button, since there is now a disconnect because of the deep copying. I am trying to avoid implementing an internal property on the Add/Edit form if I can. Below is an example of my code:

    public AddEditCustomerDialog(Customer customer, DialogMode mode)
    {
        InitializeComponent();
        InitializeCustomer(customer, mode);
    }

    private void InitializeCustomer(Customer customer, DialogMode mode)
    {
        this.customer = customer;

        if (mode == DialogMode.Edit)
        {
            this.Text = "Edit Customer";
            this.tempCustomer = ObjectCopyHelper.DeepCopy(this.customer);
            this.customerListBindingSource.DataSource = this.tempCustomer;
            this.phoneListBindingSource.DataSource = this.tempCustomer.PhoneList;
        }
        else
        {
            this.customerListBindingSource.DataSource = this.customer;
            this.phoneListBindingSource.DataSource = this.customer.PhoneList;
        }
    }
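    Since the form already holds references to both objects, one hedged sketch is to copy the edited values back inside the form's OK handler, so the calling code never needs to see the temporary copy and no extra property is exposed. The CopyFrom method and the specific Customer properties below are assumptions for illustration; DeepCopy and the field names come from the snippet above.

    // Sketch: push the edits back into the original object only when OK is clicked.
    private void okButton_Click(object sender, EventArgs e)
    {
        if (this.tempCustomer != null)
        {
            // CopyFrom is a hypothetical member-wise copy on Customer.
            this.customer.CopyFrom(this.tempCustomer);
        }
        this.DialogResult = DialogResult.OK;
        this.Close();
    }

    // On Customer (illustrative properties only):
    public void CopyFrom(Customer other)
    {
        this.Name = other.Name;
        this.PhoneList.Clear();
        foreach (var phone in other.PhoneList)
            this.PhoneList.Add(phone);
    }

    Because the binding sources point at tempCustomer in Edit mode, the original stays untouched until this handler runs, which keeps Cancel trivial; the cost is maintaining CopyFrom whenever Customer grows a property, which is the same trade-off the deep copy already makes.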

    Read the article

  • Iterating Through N Level Children

    - by bobber205
    This seems like something neat that might be "built into" jQuery, but I think it's still worth asking. I have a problem that can easily be solved by iterating through all the children of an element. I've recently discovered I need to account for cases where I would need to go a level or two deeper than the "1 level" (just calling .children() once) I am currently doing.

    jQuery.each(divToLookAt.children(), function(index, element) {
        //do stuff
    });

    This is what I'm currently doing. To go a second level deep, I run another loop after the "do stuff" code for each element.

    jQuery.each(divToLookAt.children(), function(index, element) {
        //do stuff
        jQuery.each(jQuery(element).children(), function(indexLevelTwo, elementLevelTwo) {
            //do stuff
        });
    });

    If I want to go yet another level deep, I have to do this all over again. This is clearly not good. I'd love to declare a "level" variable and then have it all taken care of. Anyone have any ideas for a clean, efficient, jQuery-ish solution? Thanks!

    Read the article

  • nothing in dev folder

    - by 4321bust
    Hi, I'm new to this so bear with me please. I'm attempting to set up Git on my Mac and need to be using my dev folder. However, there seems to be nothing in my folder ("zero KB on disk"), with no subdirectories listed. Other hidden directories are intact. I've never really gone this deep into things before, so I'm not sure how or why anything would previously have been deleted. Any help greatly appreciated.

    Read the article

  • Is there anything better then Microsoft Project?

    - by GuruAbyss
    I'll soon be knee-deep in a very large project and I'm looking into project management software. I need users' opinions on software-based (not web-based) solutions that are equal to or better than MS Project. It can be open source or closed source. Thank you all in advance for your insight and opinions!

    Read the article

  • Is there anything better than Microsoft Project? [closed]

    - by GuruAbyss
    Possible Duplicate: Project Planning Tools
    I'll soon be knee-deep in a very large project and I'm looking into project management software. I need users' opinions on software-based (not web-based) solutions that are equal to or better than MS Project. It can be open source or closed source. Thank you all in advance for your insight and opinions!

    Read the article

  • Remove folder structure from archive and fix error

    - by Michael
    I am trying to make a script to back up each of my Plesk hosts to individual files, and I am having two problems:
    1. I would like to remove the folder structure from the archive; the tar is 3 folders deep.
    2. I am getting this error: tar: Removing leading `/' from member names

    The code:

    FILES=/var/www/vhosts/*
    FNAME=""
    for f in $FILES
    do
        FNAME=`basename $f`
        tar cfv "/root/backup/ftp/$FNAME.tar" $f
    done

    Sample output:

    tar: Removing leading `/' from member names
    /var/www/vhosts/mydomain.com/
    /var/www/vhosts/mydomain.com/conf
    /var/www/vhosts/mydomain.com/etc/
    /var/www/vhosts/mydomain.com/etc/group
    /var/www/vhosts/mydomain.com/etc/termcap
    /var/www/vhosts/mydomain.com/etc/passwd
    /var/www/vhosts/mydomain.com/usr/

    Read the article

  • Migrate IMAP account between providers - client access only

    - by Pekka
    I have an IMAP e-mail account with my old provider and a new, empty IMAP account with the new provider. Is there a tool, or a way in Thunderbird, to migrate the e-mail data from one account to the other? I'm a bit wary about just doing a drag & drop in Thunderbird because it's quite a lot of data, and I have a deep distrust of how Thunderbird deals with IMAP data.

    Read the article

  • Software to store frequently used text in PC

    - by user15660
    Hi, I am looking for a free piece of software that can run in the taskbar (near the system clock) where I can store frequently used text, like my full street address, paths to specific deep folders and files on the computer, etc. This way I can just click the icon, which should pop up a screen where I can copy the text/string I am looking for. Any ideas? Thanks in advance.

    Read the article

  • How remove/de-index a page from Google?

    - by Jason
    On the results page when I Google "e-luminate", the 3rd and 4th links seem to point to a specific directory deep within the folders which store the images. How can I get rid of these 2 results from Google search results? How can I get Google to de-index them? I checked on the server and the folders did not seem different from other folders, but these 2 paths seem to get indexed by Google. Thank you.

    Read the article

  • Is it possible to prevent the win7 sleep state while using spotify?

    - by Skadlig
    Does anyone know if there is a way to prevent Windows 7 from going to sleep while using Spotify? I have read the answers in this question, but if possible I'd rather not resort to starting a third-party program like Insomnia every time I want to listen to music. So is there a setting or a registry entry buried somewhere deep in Windows that allows you to do this? Either for a group like "all audio" or for specific programs?

    Read the article
