Search Results

Search found 10556 results on 423 pages for 'practical approach'.


  • Preventing Users From Copying Text From and Pasting It Into TextBoxes

    Many websites that support user accounts require users to enter an email address as part of the registration process. This email address is then used as the primary communication channel with the user. For instance, if the user forgets her password, a new one can be generated and emailed to the address on file. But what if, when registering, a user enters an incorrect email address? Perhaps the user meant to enter [email protected], but accidentally transposed the first two letters, entering [email protected]. How can such typos be prevented? The only foolproof way to ensure that the user's entered email address is valid is to send a validation email upon registration that includes a link that, when visited, activates the account. (This technique is discussed in detail in Examining ASP.NET's Membership, Roles, and Profile - Part 11.) The downside to using a validation email is that it adds one more step to the registration process, which will cause some people to bail out of registering. A simpler approach to reducing email entry errors is to have the user enter their email address twice, just as most registration forms prompt users to enter their password twice. In fact, you may have seen registration pages that do just this. However, when I encounter such a registration page I usually avoid entering the email address twice; instead I enter it once and then copy and paste it from the first textbox into the second. This behavior circumvents the purpose of the two textboxes - any typo entered into the first textbox is copied into the second. Using a bit of JavaScript it is possible to prevent most users from copying text from one textbox and pasting it into another, thereby requiring the user to type their email address into both textboxes. This article shows how to disable cut and paste between textboxes on a web page using the free jQuery library. Read on to learn more!

    Read the article

  • How to present a stable data model in a public API that allows internal data structures to be changed without breaking the public view of the data?

    - by Max Palmer
    I am in the process of developing an application that allows users to write C# scripts. These scripts allow users to call selected methods and to access and manipulate data in a document. This works well; however, in the development version, scripts access the document's (internal) data structures directly. This means that if we were to change the internal data model/structure, there is a good chance that someone's script will no longer compile. We obviously want to prevent this breaking change from happening, but still want to allow the user to write sensible C# code (whilst not restricting how we develop our internal data model as a result). We therefore need to decouple our scripting API and its data structures from our internal methods and data structures. We have a few ideas as to how we might allow the user to access what is effectively a stable public version of the document's internal data*, but I wanted to throw the question out there to someone who might have real experience of this problem. NB: our internal document's data structure is quite complex and could be quite difficult to wrap. We know we want to expose as little as possible in our public API, especially as once it's out there, it's out there for good. Can anyone help? How do scripting languages / APIs decouple their public API and data structures from their internal data structures? Is there no real alternative to writing a complex interaction layer? If we need to do this, what's a good approach or pattern for wrapping complex data structures that include nested objects, including collections? I've looked at the API facade pattern, which looks like it's trying to address these kinds of issues, but are there alternatives? *One idea is to build a data facade that is kept stable across versions of our application. The facade exposes a set of facade data objects that are used in the script code. These maintain backwards compatibility and wrap access to our internal document's data model.
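
    A minimal sketch of that data-facade idea (hypothetical names; shown in Python for brevity, though the question concerns a C# API): the script-facing object wraps the internal one and re-expresses it through a contract that is kept stable across releases.

        class _InternalParagraph:                 # internal model: free to change
            def __init__(self, runs):
                self._runs = runs                 # e.g. a list of text runs

        class FacadeParagraph:                    # public and versioned: kept stable
            def __init__(self, internal):
                self._internal = internal         # wrapped, never exposed directly

            @property
            def text(self):
                # However the runs are stored internally, the public
                # contract stays "a paragraph has a text string".
                return "".join(self._internal._runs)

        doc = _InternalParagraph(["Hello, ", "world"])
        print(FacadeParagraph(doc).text)          # -> Hello, world

    Reorganizing _InternalParagraph then only requires updating the facade's accessors, not every user script.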

    Read the article

  • Automatic Appointment Conflict Resolution

    - by Thomas
    I'm trying to figure out an algorithm for resolving appointment times. I currently have a naive algorithm that repeatedly pushes conflicting appointments later until there are no more conflicts.

        # The appointment list is always sorted on start time
        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 11:00 -> 12:30>,
            <Appointment: 13:00 -> 14:00>,
            <Appointment: 13:30 -> 14:30>,
        ]

    The constraints are that appointments cannot end after 15:00 and cannot start before 9:00. This is the naive algorithm:

        for i, app in enumerate(appointment_list):
            for possible_conflict in appointment_list[i+1:]:
                if possible_conflict.start < app.end:
                    difference = app.end - possible_conflict.start
                    possible_conflict.end += difference
                    possible_conflict.start += difference
                else:
                    break

    It produces the following resolution, which obviously breaks those constraints, and the last appointment would have to be pushed to the following day:

        appointment_list = [
            <Appointment: 10:00 -> 12:00>,
            <Appointment: 12:00 -> 13:30>,
            <Appointment: 13:30 -> 14:30>,
            <Appointment: 14:30 -> 15:30>,
        ]

    This is clearly sub-optimal: it performs three appointment moves when the conflict could have been resolved with fewer - if we were able to push the first appointment backwards, we could avoid moving all the subsequent appointments down. I'm thinking there should be a sort of edit-distance approach that would calculate the least number of appointments to move in order to resolve the scheduling conflict, but I can't get a handle on the methodology. Should the solution search be breadth-first or depth-first? When do I know that a solution is "good enough"?
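
    A minimal sketch of one constraint-aware baseline (not the minimal-moves search the question asks for): pack the sorted appointments forward, then shift the whole day earlier if it overflows 15:00 and 9:00 leaves room. Times are minutes since midnight for brevity.

        DAY_START, DAY_END = 9 * 60, 15 * 60     # 9:00 and 15:00

        def pack(appointments):
            # appointments: non-empty list of (start, end) tuples sorted by start
            packed = []
            for start, end in appointments:
                duration = end - start
                if packed and start < packed[-1][1]:
                    start = packed[-1][1]        # conflict: push down
                packed.append((start, start + duration))
            overflow = packed[-1][1] - DAY_END   # minutes past 15:00
            if overflow > 0 and packed[0][0] - overflow >= DAY_START:
                # Shift the whole day earlier; a smarter search would
                # move fewer appointments than this does.
                packed = [(s - overflow, e - overflow) for s, e in packed]
            return packed

    On the example above this yields 9:30->11:30, 11:30->13:00, 13:00->14:00, 14:00->15:00 - every appointment fits within the constraints, though it moves all four appointments rather than the minimum number.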

    Read the article

  • ArchBeat Link-o-Rama for 11/10/2011

    - by Bob Rhubart
    All day, all architecture. Oracle Technology Network Architect Day - Phoenix, AZ - Dec 14. Free registration. Spend the day with your peers learning from Oracle experts in Cloud Computing, Engineered Systems, Oracle WebLogic, Oracle Coherence, Application-Driven Virtualization, and more. Registration is free, but seating is limited. Register now!

    Data Integration - Bad data is really the monster | Bikram Sinha
    "Bad data can cause huge operational failure and cost millions of dollars in terms of time and resources to clean up and validate data across multiple participating systems," says Bikram Sinha.

    Changing a navigation model on a page in WebCenter | Edwin Biemond
    Another illustrated how-to from Oracle ACE Edwin Biemond.

    Why do I need an Authenticator when I have an Identity Asserter? | Chris Johnson
    Chris Johnson responds to a user question.

    OOW: The Most Important Thing | Floyd Teter
    Oracle ACE Director Floyd Teter explains why he sees "the inclusion of Fusion Applications CRM and HCM in the Oracle Public Cloud" as the most important news to come out of Oracle OpenWorld 2011.

    Oracle Releases Oracle Solaris 11 | Gokhan Atil
    Atil offers an overview of some of the "key points" of the new Solaris 11 release.

    SOA Development Virtual Developer Day (On Demand)
    You won't get the hands-on experience available in the live event, but you will learn how a SOA approach can be implemented, whether starting afresh with new services or reusing existing ones.

    Webcast: Maximum Availability on Private Clouds - Nov 10 - 10am PT / 1pm ET
    Featuring Margaret Hamburger (Director, Product Marketing, Oracle) and Joe Meeks (Director, Product Management, Oracle).

    Should Enterprise Architecture Teams Be More Focused on Innovation? | Richard Seroter
    Richard Seroter looks for answers among opinions offered by Forrester analyst Brian Hopkins and Jude Umeh of CapGemini.

    Read the article

  • Multiple sites with the same codebase in Python

    - by Jimmy
    I am trying to run a large number of sites which share about 90% of their code. They are simply designed to query an API and return the results. They will have a common user base / database but will be configured slightly differently and will have different CSS (perhaps even different templating). My initial idea was to run them as separate applications with a common library, but I have read about the sites framework, which would allow them to run from a single instance of Django and may help to reduce memory usage. https://docs.djangoproject.com/en/dev/ref/contrib/sites/ Is the sites framework the right approach to a problem like this, and does it have real benefits over running separate applications? Initially I thought it was, but now I think otherwise. I have heard the following: your SITE_ID is set in settings.py, so in order to have multiple sites, you need multiple settings.py configurations, which means multiple distinct processes/instances. You can of course share the code base between them, but each site will need a dedicated worker / WSGIDaemon to serve it. This effectively removes any benefit of running multiple sites under one hood, if each site needs a uWSGI instance running. Alternative systems I have found: https://github.com/iivvoo/django_layers https://github.com/shestera/django-multisite I don't know which route to take with this.
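
    A minimal sketch of the "shared code base, one thin settings module per site" arrangement described above (hypothetical module names; TEMPLATE_DIR is an illustrative per-site switch, not a real Django setting):

        # settings/base.py - the ~90% shared configuration
        INSTALLED_APPS = [
            "django.contrib.sites",
            # ... shared apps ...
        ]

        # settings/site_a.py - one thin module per site
        from .base import *           # inherit everything shared
        SITE_ID = 1                    # this site's row in the django_site table
        TEMPLATE_DIR = "site_a"        # hypothetical per-site template/CSS switch

    Each site then runs as its own process, e.g. with DJANGO_SETTINGS_MODULE=settings.site_a, which is exactly the "one worker per site" trade-off the question raises.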

    Read the article

  • Who should write the test plan?

    - by Cheng Kiang
    Hi, I am on the in-house development team of my company, and we develop our company's web sites according to the requirements of the marketing team. Before releasing a site to them for acceptance testing, we are asked to give them a test plan to follow. However, the development team feels that since the requirements came from the requestors, they have the best knowledge of what to test, what to look out for, how things should behave, etc., and a test plan is thus not required. We are always in an argument over this, and developers find it a waste of time to write down things like:

        1. Click on button A.
        2. Key in XYZ in the form field and click button B.
        3. You should see behaviour C.

    which we have to repeat for each requirement/feature requested. This is basically rephrasing what's already in the requirements document. We are moving towards an Agile approach for managing our projects, and this is also requested at the end of each iteration. Unit and integration testing aside, who should be the one to come up with the end-user acceptance test plan? Should it be the requestors or the developers? Many thanks in advance. Regards, CK

    Read the article

  • My father is a doctor. He is insisting on writing a database to store non-critical patient information, despite having no programming background

    - by Dominic Bou-Samra
    So, my father is currently in the process of "hacking" together a database using FileMaker Pro, a GUI-based database tool, for his small (4-doctor) practice. The database will be used to help ease the burden of reporting from medical machines, streamlining quite a clumsy process. He's got no programming background and seems to be doing everything in his power to avoid learning things correctly. He's got duplicate data types, no database-enforced relationships (foreign/primary key constraints), and a dozen other issues. He's doing it all by hand via the GUI tool, following YouTube videos. My issue is that whilst I want him to succeed 100%, I don't think it's appropriate for him to be handling these types of decisions. How do I convince him that without some sort of education in these topics, a hacked-together solution is a bad idea? He can be quite stubborn, and I think he sees these types of jobs as "child's play". How should I approach this? Is it even that bad an idea - or am I correct in thinking he should hire a proper DBA/developer to handle this so that it doesn't become a maintenance nightmare? NB: I am a developer consultant of 4 years and I've seen my share of painful customer implementations.

    Read the article

  • Dependency injection: what belongs in the constructor?

    - by Adam Backstrom
    I'm evaluating my current PHP practices in an effort to write more testable code. Generally speaking, I'm fishing for opinions on what types of actions belong in the constructor. Should I limit things to dependency injection? If I do have some data to populate, should that happen via a factory rather than as constructor arguments? (Here, I'm thinking about my User class that takes a user ID and populates user data from the database during construction, which obviously needs to change in some way.) I've heard it said that "initialization" methods are bad, but I'm sure that depends on what exactly is being done during initialization. At the risk of getting too specific, I'll also piggyback a more detailed example onto my question. For a previous project, I built a FormField class (which handled field value setting, validation, and output as HTML) and a Model class to contain these fields and do a bit of magic to ease working with fields. FormField had some prebuilt subclasses, e.g. FormText (<input type="text">) and FormSelect (<select>). Model would be subclassed so that a specific implementation (say, a Widget) had its own fields, such as a name and date of manufacture:

        class Widget extends Model {
            public function __construct( $data = null ) {
                $this->name = new FormField('length=20&label=Name:');
                $this->manufactured = new FormDate;
                parent::__construct( $data ); // set above fields using incoming array
            }
        }

    Now, this does violate some rules that I have read, such as "avoid new in the constructor," but to my eyes this does not seem untestable. These are properties of the object, not some black-box data generator reading from an external source. Unit tests would progressively build up to any test of Widget-specific functionality, so I could be confident that the underlying FormFields were working correctly during the Widget test. In theory I could provide the Model with a FieldFactory which could supply custom field objects, but I don't believe I would gain anything from this approach. Is this a poor assumption?
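
    For comparison, a minimal sketch of the FieldFactory alternative dismissed above (hypothetical names; shown in Python for brevity, with stand-in stubs for the post's FormText and Model classes). The factory becomes the injected seam: a test can pass a factory that returns stub fields.

        class FormText:                        # stand-in for the post's FormText
            def __init__(self, length=0, label=""):
                self.length, self.label = length, label

        class Model:                           # stand-in for the post's Model
            def __init__(self, data=None):
                self.data = data

        class FieldFactory:                    # the injected seam; a test can
            def text(self, **opts):            # subclass it and return stubs
                return FormText(**opts)

        class Widget(Model):
            def __init__(self, factory, data=None):
                self.name = factory.text(length=20, label="Name:")
                super().__init__(data)

        w = Widget(FieldFactory())             # production wiring

    Whether the extra indirection pays for itself is exactly the trade-off the question weighs.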

    Read the article

  • Be liberal in what you accept... or not?

    - by Matthieu M.
    [Disclaimer: this question is subjective, but I would prefer answers backed by facts and/or reflection.] I think everyone knows about the Robustness Principle, usually summed up by Postel's Law: be conservative in what you send; be liberal in what you accept. I would agree that for the design of a widespread communication protocol this may make sense (with the goal of allowing easy extension); however, I have always thought that its application to HTML / CSS was a total failure, with each browser implementing its own silent tweak detection / behavior, making it near impossible to obtain consistent rendering across multiple browsers. I do notice, though, that the RFC for the TCP protocol deems "silent failure" acceptable unless otherwise specified - which is an interesting behavior, to say the least. There are other examples of the application of this principle throughout the software trade that regularly pop up because they have bitten developers; off the top of my head:

    - JavaScript semicolon insertion
    - C's (silent) built-in conversions (which would not be so bad if they did not truncate...)

    And there are tools to help implement "smart" behavior:

    - name-matching phonetic algorithms (Double Metaphone)
    - string-distance algorithms (Levenshtein distance)

    However, I find that this approach, while it may be helpful when dealing with non-technical users or when helping users recover from errors, has some drawbacks when applied to the design of a library/class interface:

    - it is somewhat subjective whether the algorithm guesses "right", and thus it may go against the Principle of Least Astonishment;
    - it makes the implementation more difficult, and thus gives more chances to introduce bugs (a violation of YAGNI?);
    - it makes the behavior more susceptible to change, as any modification of the "guess" routine may break old programs, nearly excluding refactoring possibilities - from the start!

    And this is what led me to the following question: when designing an interface (library, class, message), do you lean toward the robustness principle or not? I myself tend to be quite strict, using extensive input validation on my interfaces, and I was wondering if I was perhaps too strict.

    Read the article

  • geomipmapping using displacement mapping (and glVertexAttribDivisor)

    - by Will
    I woke up with a clear vision, but sadly my laptop's card doesn't do displacement mapping or glVertexAttribDivisor, so I can't test it out; I'm left sharing it here. With geomipmapping, the grid at any factor is transposable: if you pass in an offset - say, as a uniform - you can reuse the same vertex and index array again and again. If you also pass in the offset into the heightmap as a uniform, the vertex shader can do displacement mapping. If the displacement map is mipmapped, you get the advantages of trilinear filtering for distant maps. And, if the scenery is closer, rather than exposing that you have a world made out of quads, you can use your transposable grid vertex array and indices to do vertex-shader interpolation (fancy splines) for super-smooth infinite zoom. So I have some questions:

    - Does it work, in theory and in practice? Does anyone do it? Does this technique have a name? Are there papers, demos, anything I can look at?
    - Does glVertexAttribDivisor mean that you can have a single glMultiDrawElementsEXT or a similar approach to draw all your terrain tiles in one call, rather than setting up the uniforms and emitting each tile? Would this offer any noticeable gains?
    - Does a heightmap that is GL_LUMINANCE take just one byte per pixel (= vertex)? (On mainstream cards, obviously. Does storage vary in practice?)
    - Does going to the effort of reusing the same vertices and indices mean that you can basically fill the GPU RAM with heightmap and not a lot else, giving you either bigger landscapes or more detailed landscapes/meshes for the same bang?
    - Is mipmapping the displacement map going to work, including on future cards? Is it going to introduce insurmountable inaccuracies if it is enabled?

    Read the article

  • Can I consider interface-oriented programming to be good object-oriented programming?

    - by david
    I have been programming for decades, but I was never used to object-oriented programming. In recent years, though, I had a great opportunity to learn OOP, its principles, and a lot of patterns that are great. Since I learned OOP, I have tried to apply it to a couple of projects and found those projects successful. Unfortunately I didn't follow extreme programming, which suggests writing tests first, mainly because their time frames were tight. What I did for those projects was:

    - identify all necessary classes and create them with proper properties and methods;
    - wherever there is a dependency between classes, write an interface between them;
    - see if there are any patterns for certain relationships between classes that could replace them.

    By successful, I mean that development was quick, the classes can be reused better, and the design is flexible enough that another programmer does not have to change something else to fix another part. But I wonder if this is a good practice. Of course, I know I need to put writing unit tests first in my work process. But other than that, is there any problem with this approach - creating lots of interfaces - in the long term?
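
    A minimal sketch of the "write an interface between dependent classes" step above (hypothetical names; shown in Python, whose abc module plays the role of interfaces): the consumer depends only on the abstract contract, so either side can change independently.

        from abc import ABC, abstractmethod

        class PaymentGateway(ABC):             # the interface between two classes
            @abstractmethod
            def charge(self, amount): ...

        class StripeGateway(PaymentGateway):   # hypothetical concrete dependency
            def charge(self, amount):
                return f"charged {amount}"

        class Checkout:                        # depends on the interface only
            def __init__(self, gateway: PaymentGateway):
                self.gateway = gateway

            def pay(self, amount):
                return self.gateway.charge(amount)

        print(Checkout(StripeGateway()).pay(10))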

    Read the article

  • How to simulate pressure with particles?

    - by BeachRunnerJoe
    I'm trying to simulate pressure with a collection of spherical particles in a Unity game I'm building. A couple of notes about the problem: the goal is to fill a constantly changing 2D space/void with small, frictionless spheres. The game is trying to simulate the ever-growing pressure of more objects being shoved into this space. The level itself is constantly scrolling from left to right, meaning that if the space's dimensions are not changed by the user it will automatically get smaller (the leftmost part of the space will continually scroll off-screen). I'm wondering what approaches I can take to tackle these problems:

    1. Detecting when there is space to fill, and then adding spheres to the space.
    2. Removing spheres from the space when it is shrinking.
    3. Simulating pressure on the spheres so that they "explode outwards" when more space is created.

    The current approach I am contemplating is using a constantly moving wall that is off-screen and moves with the screen (the original post includes an illustration of this). This moving wall will push and trap the spheres in the space. As for adding new spheres, I was going to have either (1) spheres replicate themselves upon detecting free space, or (2) spheres spawn at the left side of the space (where the wall is), pushing the rest of the spheres out to fill the space. I foresee problems with idea #1 because it likely wouldn't really create/simulate pressure; idea #2 seems more promising, but raises the question of where these new sphere particles should spawn (and what the ramifications are of spawning them when there IS no space). Thanks so much in advance for your wisdom!

    Read the article

  • session management: verifying a user's log-in state

    - by good_computer
    I am storing sessions in my database. Every time a user logs in, I create a new row corresponding to the new session, generate a new session id, and send it as a cookie to the browser. My session data looks something like this:

        {
            'user_id': 1234,
            'user_name': 'Sam',
            ...
        }

    When a request comes in, I check whether a cookie with a session id is sent. If it is, I fetch the session data corresponding to that session id from my database (or memcache). When the user logs out, I remove the session data from my database (and memcache), and delete the cookie from the user's browser too. Notice that in my session data, I don't have something like logged_in: true. This is because if I find a session record in the database (or memcache) I deduce that the user is logged in, and if there is no session record found, the user is not logged in. My question is: is this the right approach? Should I have a logged_in key in my session data? Is there any possibility that a session record may be present on the server where the corresponding user is actually NOT logged in? Are there any security implications in having or not having such a key?
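
    A minimal sketch of the lookup described above (a dict standing in for the database/memcache; names hypothetical). "Logged in" is simply "a session record exists" - no logged_in flag is stored.

        sessions = {}                          # session_id -> session data

        def login(session_id, user_id, user_name):
            sessions[session_id] = {"user_id": user_id, "user_name": user_name}

        def logout(session_id):
            sessions.pop(session_id, None)     # drop the record (the cookie is deleted too)

        def current_user(cookie_session_id):
            # Returns the session data if the user is logged in, else None.
            return sessions.get(cookie_session_id)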

    Read the article

  • Revolutionizing Digital Commerce

    - by bwalstra
    The confluence of the Internet, the pace of change in technology, and the demands of the value-conscious consumer are accelerating the evolution of the global digital marketplace at an unprecedented rate. Success in the new digital economy has become inextricably linked with the agility to launch innovative products, services, and new business models efficiently and with minimal risk. A major obstacle to agility, and by extension to success in digital commerce, is the fact that by and large information technology (IT) infrastructure is tightly coupled with particular business models. Enterprises, through well-intentioned but misguided cost-saving beliefs, continue to customize existing infrastructure and create new silos to support new business models. In reality, this approach results in rigid, inflexible business processes and exposes the enterprise to unnecessary risks, higher opportunity costs, and lower profit margins. Oracle, a leading supplier of business solutions to the enterprise, is enabling the business strategies necessary to succeed in the digital economy by offering a modern, open, modular, and functionally comprehensive revenue management solution that decouples IT infrastructure from business models. Enterprises using the Oracle solution are able to focus on core competencies and innovate unimpeded, assured that business and IT systems will seamlessly adapt to the changing conditions of the digital economy. Revolutionizing Digital Commerce: An Oracle Revenue Management Solution

    Read the article

  • How to proceed on the waypoint path?

    - by Alpha Carinae
    I'm using Dijkstra's algorithm to find the shortest path, and I'm drawing this path on the screen. As the character object moves, the path updates itself (it shortens as the object approaches the target and lengthens as the object moves away from it). I tried to visualize my problem. In the beginning state, the 'A' node is the target, the path is blue, and the object is green. I draw this path from the object to the closest node. In this case my problem occurs: because the 'D' node is closer to the object than the 'C' node, the drawn path bends back to 'D' even though the object has already moved past it. So, how can I decide that the object has passed the 'D' node? The path should instead run from the object straight to 'C'. One idea that comes to mind is to use distance variables between the two closest nodes on the route path (in this example, the 'C' and 'D' nodes): if the object is approaching 'C' and moving away from 'D' at the same time, that means the character has passed 'D'. However, I think there are standardized and easier ways to solve this. What approach should I take?
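
    One common answer (a sketch, not necessarily the standard the question hopes for): treat a waypoint as passed once the object is beyond the plane through that waypoint perpendicular to the leg toward the next one, i.e. when dot(C - D, pos - D) > 0 for current waypoint D and next waypoint C.

        # Minimal sketch: has the object passed waypoint `wp` on the way to `nxt`?
        def passed(wp, nxt, pos):
            leg = (nxt[0] - wp[0], nxt[1] - wp[1])     # direction D -> C
            rel = (pos[0] - wp[0], pos[1] - wp[1])     # direction D -> object
            return leg[0] * rel[0] + leg[1] * rel[1] > 0

        # Advance along the path whenever the current waypoint is passed:
        # while i + 1 < len(path) and passed(path[i], path[i + 1], object_pos): i += 1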

    Read the article

  • Advice: How to convince my newly anointed team lead not to rewrite the code base from scratch

    - by shan23
    I work at a pretty renowned MNC, and the module that I work on has been assigned to a new "lead". The code base is pretty huge (~130K lines or more, with interdependencies on other modules), but stable - some parts have grown ugly over the years, but it's provably in a working state (our products have run on it for years, even new ones). The problem is, our lead wants to rewrite the code from scratch, to encompass "finer granularity and a proactive design". I know in my gut that's not a very good idea, but how do I convince him and the rest of the team (who are pretty much more senior than me in terms of years of experience) without sounding too pedantic myself (thou shalt not rewrite, as Joel et al. have clear articles prohibiting it)? I have a good working relationship with the person concerned and don't want to ruin it, but neither do I want to be party to a decision that would surely plague us for years to come!! Any suggestions for a milder, yet effective, approach? Even accounts of how you have tackled such a situation to your liking would help me a lot! EDIT: The code base I'm talking about is not a product/GUI, but at the kernel level, with all the critical functionality for our product. I hope now you know why I sound so apprehensive!

    Read the article

  • SOA Starting Point: Methods for Service Identification and Definition

    As more and more companies start to incorporate a Service-Oriented Architecture design approach into their existing enterprise systems, the need arises for a standardized integration technology. One common technology used by companies is an Enterprise Service Bus (ESB). An ESB, as defined by Progress Software, connects and mediates all communications and interactions between services. In essence, an ESB is a form of middleware that allows services to communicate with one another regardless of framework, environment, or location. With the emergence of the ESB, a new emphasis is now being placed on approaches that can be used to determine which Web services should be built, and in what order. In May 2011, SOA Magazine published an article that identified ten common methods for identifying and defining services:

    1. Business Process Decomposition
    2. Business Functions
    3. Business Entity Objects
    4. Ownership and Responsibility
    5. Goal-Driven
    6. Component-Based
    7. Existing Supply (Bottom-Up)
    8. Front-Office Application Usage Analysis
    9. Infrastructure
    10. Non-Functional Requirements

    Each of these methods has pros and cons regarding its use within the design process. I personally feel that multiple methodologies should be used during a design process in order to accurately define a design for a system or enterprise system. I like to create a custom cocktail derived from combining these methodologies, ensuring that my design fits the project's and business's needs while still following development standards and guidelines. Of these ten methods, I am particularly fond of Business Process Decomposition, Business Functions, Goal-Driven, and Component-Based, and I routinely use them in my designs.

    Works Cited:
    Hubbers, J.-W., Ligthart, A., & Terlouw, L. (2007, December 10). "Ten Ways to Identify Services." SOA Magazine. http://www.soamag.com/I13/1207-1.php
    Progress Software. (2011, October 30). "ESB Architecture and Lifecycle Definition." http://web.progress.com/en/esb-architecture-lifecycle-definition.html

    Read the article

  • Fixed-Function vs Shaders: Which for beginner?

    - by Rob Hays
    I'm currently going to college for computer science. Although I do plan on utilizing an existing engine at some point to create a small game, my aim right now is to learn the fundamentals: namely, 3D programming. I've already done some research regarding the choice between DirectX and OpenGL, and the general sentiment is that whether you choose OpenGL or DirectX as your training-wheels platform, a lot of the knowledge transfers to the other platform. Therefore, since OpenGL is supported by more systems (probably a silly reason to choose what to learn), I decided that I'm going to learn OpenGL first. After I made this decision, I did some more research and found out about a dichotomy that I was somehow unaware of all this time: fixed-function OpenGL vs. modern programmable shader-based OpenGL. At first, I thought it was an obvious choice that I should learn shader-based OpenGL, since that's what's most commonly used in the industry today. However, I then stumbled upon the very popular Learning Modern 3D Graphics Programming by Jason L. McKesson, located here: http://www.arcsynthesis.org/gltut/ I read through the introductory bits, and in the "About This Book" section the author states: "First, much of what is learned with this approach must be inevitably abandoned when the user encounters a graphics problem that must be solved with programmability. Programmability wipes out almost all of the fixed function pipeline, so the knowledge does not easily transfer." Yet at the same time he also makes the case that fixed functionality provides an easier, more immediate learning curve for beginners, stating: "It is generally considered easiest to teach neophyte graphics programmers using the fixed function pipeline." Naturally, you can see why I might be conflicted about which paradigm to learn: do I spend a lot of time learning (and then later unlearning) the ways of fixed functionality, or do I start out with shaders? My primary concern is that modern programmable shaders might somehow require the programmer to already understand the fixed-function pipeline, but I doubt that's the case. TL;DR: As an aspiring game graphics programmer, is it in my best interest to learn 3D programming through the fixed-function pipeline or through modern shader-based programming?

    Read the article

  • Space-efficient data structures for broad-phase collision detection

    - by Marian Ivanov
    As far as I know, these are the three types of data structures that can be used for the collision detection broad phase:

    Unsorted arrays: check every object against every other object - O(n^2) time, O(1) space. It's so slow that it's useless unless n is really small.

        for (i = 1; i < objects; i++) {
            for (j = 0; j < i; j++)
                narrowPhase(i, j);
        }

    Sorted arrays: sort the objects, so that you get O(n^(2-1/k)) time for k dimensions (O(n^1.5) for 2D and O(n^1.67) for 3D) and O(n) space. This assumes the space is 2D and sortedArray is sorted so that if one object begins at sortedArray[i] and another object ends at sortedArray[i-1], they don't collide.

    Heaps of stacks: divide the objects between a heap of stacks, so that you only have to check a bucket, its children, and its parents - O(n log n) time, but O(n^2) space. This is probably the most frequently used approach.

    Is there a way to get O(n log n) time with less space? When is it more efficient to use sorted arrays over heaps, and vice versa?
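
    For reference, a minimal sketch of the sorted-array idea on one axis (often called "sweep and prune"; Python used here for brevity). Because the boxes are sorted by their minimum x, the inner loop can stop at the first box that starts beyond the current box's maximum x, which is where the sub-quadratic behavior comes from.

        # boxes: (min_x, max_x) tuples, pre-sorted by min_x
        def broad_phase(boxes):
            pairs = []
            for i, a in enumerate(boxes):
                for b in boxes[i + 1:]:
                    if b[0] > a[1]:          # b starts after a ends: prune the rest
                        break
                    pairs.append((a, b))     # x-overlap: candidate for the narrow phase
            return pairs

        print(broad_phase([(0, 2), (1, 3), (5, 6)]))   # -> [((0, 2), (1, 3))]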

    Read the article

  • IDC's Sally Hudson on Securing Mobile Access

    - by Naresh Persaud
    After the launch of Identity Management 11g R2, Oracle Magazine writer David Baum sat down with Sally Hudson, research director of security products at International Data Corporation (IDC) to get her perspective on securing mobile access.  Below is an excerpt from the interview. The complete article can be found here. "We’re seeing a much more diverse landscape of devices, computing habits, and access methods from outside of the corporate network. This trend necessitates a total security picture with different layers and end-point controls. It used to be just about keeping people out. Now, you have to let people in. Most organizations are looking toward multifaceted authentication—beyond the password—by using biometrics, soft tokens, and so forth to do this securely. Corporate IT strategies have evolved beyond just identity and access management to encompass a layered security approach that extends from the end point to the data center. It involves multiple technologies and touch points and coordination, with different layers of security from the internals of the database to the edge of the network." ( Sally Hudson, Oracle Mag Sept/Oct 2012) As the landscape changes you can find out how to adapt by reading Oracle's strategy paper on providing identity services at Internet scale. 

    Read the article

  • JAX-WS SOAP over JMS by Edwin Biemond

    - by JuergenKress
    With WebLogic 12.1.2, Oracle now also supports JAX-WS SOAP over JMS. Before 12.1.2 we had to use JAX-RPC, without any JDeveloper support; we needed to use Ant to generate all the web service code (see this blog post for the details). In this blog post I will show you all the necessary JDeveloper steps to create a SOAP-over-JMS JAX-WS web service (bottom-up approach) and to generate a Web Service Proxy client to invoke this service, plus let you know what works and what doesn't. We start with a simple HelloService class with a sayHello method. Read the full article here. SOA & BPM Partner Community: for regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Read the article

  • Sharing swap space between Windows and Ubuntu

    - by Leftium
    This Linux Swap Space Mini-HOWTO describes how to share swap space between Windows and Linux. Do these instructions still apply to Ubuntu in 2011? How should I modify the steps for Ubuntu? Is there a better approach to sharing swap space? Based on the HOWTO, it seems best to create a dedicated NTFS swap partition: dedicated, so the swap file will be contiguous and remain unfragmented; NTFS, so both Windows and Ubuntu can read/write to it (or is FAT32 better for this purpose?). Then, configure Ubuntu to prepare the swap space for use by Linux on start-up and by Windows on shutdown. I want to dual-boot Ubuntu and Windows 7 on my X301 laptop. However, my laptop only has a 64 GB SSD, so I would like to conserve as much disk space as possible. Update: there is an alternate method using a special driver for Windows that lets you use a Linux swap partition for temporary storage, like a RAM disk, but it doesn't seem to be as good...

    Read the article

  • GWB | Comment Spam On The Rise

    - by Geekswithblogs Administrator
    I don’t know a member on Geekswithblogs.net that is not frustrated with the amount of spam they get. It is a major problem that we have been dealing with for 6+ years and trying to come up with new ways to fight. As spammers get smarter, we have to continue to upgrade the tools we use to combat it. Just like any spam filter, sometimes good comments will get caught up. This has been a huge concern for some bloggers causing us to tame what we call spam and not spam. So this post is here just to state we know the spam problem is like a wave, sometimes it is not so bad, other times it gets worse. Right now it is worse. One measure we will take is a requirement for CAPTCHA soon if it continues since most members don’t clean up their spam via the admin tools (which are not the best tools, I know). Also I want to solicit a better approach from the members, what would you like the spam interface on GWB to be like? Be realistic cause we all want “Zero Spam, Good Comment live”. Related Tags: Geekswithblogs.net, Spam

    Read the article

  • Collision Detection for a 2D RPG

    - by PHMitrious
    First of all, I have done some research on this topic before asking. I'm asking this question to get some opinions, so that I don't make the decision on my own but take other people's experience into account as well. I'm starting a 2D online RPG project, using SFML for graphics and input, and I'm creating a basic game structure with modules for each part of the game. Well, let me get to the point; I just wanted to give you guys some context. I want to decide how I'm going to handle collision detection. I'm going to work with tile maps divided into layers (as usual), plus an extra two layers (not exactly in the map) for objects. So I'll have collisions between objects and agents (players, NPCs, monsters, spells, etc.) and between agents and tiles. The second kind can be solved easily; the first needs a bit more work. I considered creating a basic collision-test engine using polygons, with a quadtree to reduce the number of tests, since I'm going to be working with big maps with lots of objects - maintaining both a physical and a graphical representation of the world. I also considered using a physics engine like Box2D just for collision tests. The first approach would take more work on my part, but the second has the overhead of using a whole physics engine for collision detection alone, with no physics. What do you guys think?

    Read the article

  • Usage of repository between EF model and code consumer

    - by jim
    I have binary data in my database that I'll have to convert to a bitmap at some point. I was wondering whether it's appropriate to use a repository and do the conversion there. My consumer, a presentation layer, will use this repository. For example:

        // This is a class I created for modeling the item as is.
        public class RealItem
        {
            public string Name { get; set; }
            public Bitmap Image { get; set; }
        }

        public abstract class BaseRepository
        {
            // Using Unity (http://unity.codeplex.com) to inject the dependency of the entity context.
            [Dependency]
            public Context Context { get; set; }
        }

        public class ItemRepository : BaseRepository
        {
            public List<RealItem> Select()
            {
                IEnumerable<Items> items = from item in Context.Items select item;
                List<RealItem> lst = new List<RealItem>();
                foreach (Items itm in items)
                {
                    MemoryStream stream = new MemoryStream(itm.Image);
                    Bitmap image = (Bitmap)Image.FromStream(stream);
                    RealItem ritem = new RealItem { Name = itm.Name, Image = image };
                    lst.Add(ritem);
                }
                return lst;
            }
        }

    Is this a correct way to use the repository pattern? I'm learning this pattern, and I've seen a lot of examples online that use a repository, but when I looked at their source code, I saw things like this:

        public IQueryable<object> Select()
        {
            return from q in base.Context select q;
        }

    As you can see, no behavior is added to the system by their approach, so I was confused that maybe a repository is something else and I got it all wrong. In the end, shouldn't there be extra benefits to using them?

    Read the article
