Search Results

Search found 9091 results on 364 pages for 'thinking'.


  • links for 2010-04-29

    - by Bob Rhubart
    - AS11 Oracle B2B Sync Support - Series 1 (Oracle Fusion Middleware - B2B Team Blog): Sinkarbabu Kirubanithi with part 1 of a planned 3-part series on synchronous message support in Oracle B2B 11g. (tags: oracle otn fusionmiddleware b2b)
    - Java 2 Go!: How to write a simple yet “bullet-proof” object cache: "So, while we were thinking hard to come up with the most efficient, generic and elegant way of finally implementing our weak and soft caches, Mr. Eric Chan, who is one of the main architects in Oracle Beehive team, had a very interesting breakthrough. In short terms, he thought of a very nice way of combining both WeakReference and SoftReference in our weak and soft caches so that they would provide exactly the same functionality without having to deal with those reference queues at all. Basically, instead of using a plain HashMap as our backing storage, we used a java.util.WeakHashMap in both our cache implementations. The hat trick was what and how to store things in it." - Eduardo Rodrigues (tags: oracle java sun)
    - @jamet123: First Look – Oracle Data Mining: "[Oracle Data Mining] is a nice product for Oracle database customers and well worth looking into. The new UI will only make it more so." - James Taylor (tags: oracle otn datamining database)
    - Live Webcast: Social BPM: Integrating Enterprise 2.0 with Business Applications #oracle: Peggy Chen and Dan Tortorici show you how to take your business to the next level with a unified solution that fosters process-based collaboration between employees, partners, and customers. Wednesday, May 12, 2010 at 11:00am PT / 2:00pm ET (tags: oracle otn enterprise2.0 webcast)
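
    The combination Rodrigues describes can be sketched in a few lines. Below is a minimal, hypothetical version (not the actual Beehive code): the WeakHashMap holds keys weakly and the values are wrapped in SoftReferences, so the garbage collector can clear either side without any explicit ReferenceQueue bookkeeping.

        import java.lang.ref.SoftReference;
        import java.util.Map;
        import java.util.WeakHashMap;

        // Entries disappear when the key becomes unreachable (weak keys);
        // values may additionally be reclaimed under memory pressure (soft values).
        public class SoftValueCache<K, V> {
            private final Map<K, SoftReference<V>> backing = new WeakHashMap<>();

            public synchronized void put(K key, V value) {
                backing.put(key, new SoftReference<>(value));
            }

            public synchronized V get(K key) {
                SoftReference<V> ref = backing.get(key);
                return (ref == null) ? null : ref.get(); // null if already collected
            }
        }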

    Read the article

  • Manage ClickOnce releases for different parties

    - by Dirk Beckmann
    I'm struggling with release management of a piece of software. First, some general information:
    - It is a ClickOnce application
    - I follow the release-often practice
    - There are about 30 parties served with this software
    - I need full control over which update will be delivered to which party
    - Not every party is allowed to get the latest update/release
    - Each party has multiple clients that are all allowed to get the latest update served for that specific party
    So that's a rough description of my requirements. Let me explain how I was thinking of solving this. I would like to create a "deployment" website (ASP.NET) that will handle all the requests. There are two endpoints: one for downloading the client and one where the client checks for updates. Each party has a separate endpoint, like DeploymentSite/party1 and DeploymentSite/party2, while the Application Files are still stored centrally. So I thought it would be manageable with mage.exe, using the following steps:
    1. Build the application and store the new release in the Application Files repository/folder
    2. Get the parties that should be updated (config file, database, whatever)
    3. Run mage.exe to create a new application and deployment manifest for each party in the update list, pointing at the new Application Files location (1.0.2)
    Actually, I'm really struggling with this mage.exe stuff. I can't create the appropriate files with the needed codebase. How do I handle these requirements?
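
    For step 3, one possible shape for the mage.exe calls - a sketch using the documented mage.exe switches, where all paths, names, URLs and the certificate are placeholders rather than anything from the original question:

        rem Regenerate the application manifest for the new release
        mage -New Application -FromDirectory "AppFiles\1.0.2" -ToFile "AppFiles\1.0.2\MyApp.exe.manifest" -Name "MyApp" -Version 1.0.2.0

        rem Create one deployment manifest per party; the codebase points back at the shared Application Files
        mage -New Deployment -AppManifest "AppFiles\1.0.2\MyApp.exe.manifest" -AppCodeBase "http://server/AppFiles/1.0.2/MyApp.exe.manifest" -ToFile "DeploymentSite\party1\MyApp.application" -ProviderUrl "http://server/DeploymentSite/party1/MyApp.application" -Version 1.0.2.0

        rem Manifests must be (re-)signed after every change
        mage -Sign "DeploymentSite\party1\MyApp.application" -CertFile cert.pfx -Password secret

    Repeating the second and third commands per party in the update list keeps the Application Files central while giving each party its own deployment manifest and update URL.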

    Read the article

  • What is the best way to evaluate new programmers?

    - by Rafael
    What is the best way to evaluate candidates for a new job (talking merely in terms of programming skills)? In my company we have had a lot of bad experiences with people who have good grades but no real programming skills. They are merely code monkeys, without the ability to analyze problems and find solutions. More things I have to note: the education system in my country sucks - really sucks. The people who are good at this kind of job are good because they have a talent for it or really try to learn on their own. A university/graduate/postgrad degree doesn't necessarily mean that you know exactly how to do things. Certifications also mean nothing here, because the people in charge of the certification courses also lack skills (or are in low-paying jobs). We really need to find good candidates who are flexible and don't have a mechanical way of thinking (because, in our experience, this type of person performs poorly). We are in a government institution, and candidates don't necessarily come from outside, but we have the option to accept or reject any candidate until we find the correct one. I hope I'm not sounding too aggressive in my question; and BTW, I'm a programmer myself. Edit: I figured out that I asked something really complex here. I will un-toggle "the correct answer" only to keep the discussion flowing, without any bias.

    Read the article

  • Microsoft’s new technical computing initiative

    - by Randy Walker
    I made a mental note from earlier in the year: Microsoft literally buys computers by the truckload. From what I understand, it's a typical practice amongst large software vendors. You plug a few wires in, you test it, and you instantly have mega tera tera flops (don't hold me to that number). Microsoft has been trying to plug away at their cloud services (named Azure), which, for the layman, means Microsoft runs your software on their computers, and as demand increases you can allocate more computing power on the fly.
    With this in mind, it doesn't surprise me that I was recently sent an executive email concerning Microsoft's new technical computing initiative. I find it to be a great marketing idea with actual substance behind the real work. From the academic programmer's perspective, in college we dreamed about this type of processing power. This has decades of computer science theory behind it.
    A copy of the email received (note that I almost deleted this email, thinking it was spam due to its length):
    We don't often think about how complex life really is. Take the relatively simple task of commuting to and from work: it is, in fact, a complicated interplay of variables such as weather, train delays, accidents, traffic patterns, road construction, etc. You can, however, take steps to shorten your commute - using a good, predictive understanding of a few of these variables. In fact, you probably are already taking these inputs and instinctively building a predictive model that you act on daily to get to your destination more quickly.
    Now, when we apply the same method to very complex tasks, this modeling approach becomes much more challenging. Recent world events clearly demonstrated our inability to process vast amounts of information and variables that would have helped to more accurately predict the behavior of global financial markets or the occurrence and impact of a volcano eruption in Iceland.
    To make sense of issues like these, researchers, engineers and analysts create computer models of the almost infinite number of possible interactions in complex systems. But they need increasingly more sophisticated computer models to better understand how the world behaves and to make fact-based predictions about the future. And to do this requires a tremendous amount of computing power to process and examine the massive data deluge from cameras, digital sensors and precision instruments of all kinds. This is the key to creating more accurate and realistic models that expose the hidden meaning of data, which gives us the kind of insight we need to solve a myriad of challenges.
    We have made great strides in our ability to build these kinds of computer models, and yet they are still too difficult, expensive and time consuming to manage. Today, even the most complicated data-rich simulations cannot fully capture all of the intricacies and dependencies of the systems they are trying to model. That is why, across the scientific and engineering world, it is so hard to say with any certainty when or where the next volcano will erupt and what flight patterns it might affect, or to more accurately predict something like a global flu pandemic. So far, we just cannot collect, correlate and compute enough data to create an accurate forecast of the real world. But this is about to change. Innovations in technology are transforming our ability to measure, monitor and model how the world behaves.
    The implication for scientific research is profound, and it will transform the way we tackle global challenges like health care and climate change. It will also have a huge impact on engineering and business, delivering breakthroughs that could lead to the creation of new products, new businesses and even new industries.
    Because you are a subscriber to executive e-mails from Microsoft, I want you to be the first to know about a new effort focused specifically on empowering millions of the world's smartest problem solvers. Today, I am happy to introduce Microsoft's Technical Computing initiative. Our goal is to unleash the power of pervasive, accurate, real-time modeling to help people and organizations achieve their objectives and realize their potential. We are bringing together some of the brightest minds in the technical computing community across industry, academia and science at www.modelingtheworld.com to discuss trends, challenges and shared opportunities.
    New advances provide the foundation for tools and applications that will make technical computing more affordable and accessible, where mathematical and computational principles are applied to solve practical problems. One day soon, complicated tasks like building a sophisticated computer model - which would typically take a team of advanced software programmers months to build and days to run - will be accomplished in a single afternoon by a scientist, engineer or analyst working at the PC on their desktop. And as technology continues to advance, these models will become more complete and accurate in the way they represent the world. This will speed our ability to test new ideas, improve processes and advance our understanding of systems.
    Our technical computing initiative reflects the best of Microsoft's heritage. Ever since Bill Gates articulated the then far-fetched vision of "a computer on every desktop" in the early 1980s, Microsoft has been at the forefront of expanding the power and reach of computing to benefit the world. As someone who worked closely with Bill for many years at Microsoft, I am happy to share with you that the passion behind that vision is fully alive at Microsoft and is carried out in the creation of our new Technical Computing group.
    Enabling more people to make better predictions: We have seen the impact of making greater computing power more available firsthand through our investments in high performance computing (HPC) over the past five years. Scientists, engineers and analysts in organizations of all sizes and sectors are finding that using distributed computational power creates societal impact, fuels scientific breakthroughs and delivers competitive advantages. For example, we have seen remarkable results from some of our current customers:
    - Malaria strikes 300,000 to 500,000 people around the world each year. To help in the effort to eradicate malaria worldwide, scientists at Intellectual Ventures use software that simulates how the disease spreads and would respond to prevention and control methods, such as vaccines and the use of bed nets. Technical computing allows researchers to model more detailed parameters for more accurate results and receive those results in less than an hour, rather than waiting a full day.
    - Aerospace engineering firm a.i. solutions, Inc. needed a more powerful computing platform to keep up with the increasingly complex computational needs of its customers: NASA, the Department of Defense and other government agencies planning space flights. To meet that need, it adopted technical computing. Now, a.i. solutions can produce detailed predictions and analysis of the flight dynamics of a given spacecraft, from optimal launch times and orbit determination to attitude control and navigation, up to eight times faster. This enables them to avoid mistakes in any areas that can cause a space mission to fail and potentially result in the loss of life and millions of dollars.
    - Western & Southern Financial Group faced the challenge of running ever larger and more complex actuarial models as its number of policyholders and products grew and regulatory requirements changed. The company chose an actuarial solution that runs on technical computing technology. The solution is easy for the company's IT staff to manage and adjust to meet business needs. The new solution helps the company reduce modeling time by up to 99 percent - letting the team fine-tune its models for more accurate product pricing and financial projections.
    Our Technical Computing direction: Collaborating closely with partners across industry and academia, we must now extend the reach of technical computing even further to help predictive modelers and data explorers make faster, more accurate predictions. As we build the Technical Computing initiative, we will invest in three core areas:
    - Technical computing to the cloud: Microsoft will play a leading role in bringing technical computing power to scientists, engineers and analysts through the cloud. Existing high-performance computing users will benefit from the ability to augment their on-premises systems with cloud resources that enable 'just-in-time' processing. This platform will help ensure processing resources are available whenever they are needed - reliably, consistently and quickly.
    - Simplify parallel development: Today, computers are shipping with more processing power than ever, including multiple cores, but most modern software only uses a small amount of the available processing power. Parallel programs are extremely difficult to write, test and troubleshoot. However, a consistent model for parallel programming can help more developers unlock the tremendous power in today's modern computers and enable a new generation of technical computing. We are delivering new tools to automate and simplify writing software through parallel processing from the desktop... to the cluster... to the cloud.
    - Develop powerful new technical computing tools and applications: We know scientists, engineers and analysts are pushing common tools (i.e., spreadsheets and databases) to the limits with complex, data-intensive models. They need easy access to more computing power and simplified tools to increase the speed of their work. We are building a platform to do this. Our development efforts will yield new, easy-to-use tools and applications that automate data acquisition, modeling, simulation, visualization, workflow and collaboration. This will allow them to spend more time on their work and less time wrestling with complicated technology.
    Thinking bigger: There is so much left to be discovered and so many questions yet to be answered in the fascinating world around us. We believe the technical computing community will show us that we have not seen anything yet. Imagine just some of the breakthroughs this community could make possible:
    - Better predictions to help improve the understanding of pandemics, contagion and global health trends.
    - Climate change models that predict environmental, economic and human impact, accessible in real-time during key discussions and debates.
    - More accurate prediction of natural disasters and their impact to develop more effective emergency response plans.
    With an ambitious charter in hand, this new team is ready to build on our progress to date and execute Microsoft's technical computing vision over the months and years ahead. We will steadily invest in the right technologies, tools and talent, and work to bring together the technical computing community. I invite you to visit www.modelingtheworld.com today. We welcome your ideas and feedback. I look forward to making this journey with you and others who want to answer the world's biggest questions, discover solutions to problems that seem impossible and uncover a host of new opportunities to change the world we live in for the better.
    Bob

    Read the article

  • Learning advanced Java skills

    - by moe
    I've been programming in Java for a while and I really like the language. I've mostly just done game programming, but I want to get a feel for some of the more commonly used APIs and frameworks, and generally get a more well-rounded grasp of the language and the common libraries in the current job market. From what I've found, things like Spring, Hibernate, and GWT are pretty in demand right now. I looked at some tutorials online and they weren't hard to follow, but I really felt like I had no context for what I was learning - I had no idea how any of it would be used in a real work environment. I know nothing can rival the benefit I'd get from actual work experience, but that's not an option for me right now. I need another way to learn these technologies, in a way where I'll at least feel comfortable working with them and know what I'm doing beyond just understanding what a given piece of code does. I checked out a few books, but they were all really old (like pre-2006 - am I right to assume those books would be kind of out of date today?) or required experience with libraries that I didn't have and can't get. I hate getting stuck looking for the best resource to learn something instead of spending my time actually learning. All I really want is someone to point me to a resource (website or ebook) that is aimed at already experienced Java developers and will not only teach me some interesting, useful Java technology (anything that is useful; I don't know much outside of graphics libraries and game-related things, so I was thinking some database or web programming APIs), but also give me a good perspective on it and leave me feeling confident that I could actually use what I learned in a practical application. If my post makes you think I'm not yet experienced enough to be learning these things - which I doubted earlier today but am now starting to question - then what do you think is the next step for me? I just want to get better at Java. Thanks, everyone.

    Read the article

  • How should I start with Lisp?

    - by Gary Rowe
    I've been programming for years now, working my way through various iterations of Blub (BASIC, Assembler, C, C++, Visual Basic, Java, Ruby - in no particular order of "Blub-ness"), and I'd like to learn Lisp. However, I have a lot of inertia, what with limited time (family, full-time job, etc.) and a comfortable happiness with my current Blub (Java). So my question is this: given that I'm someone who would really like to learn Lisp, what would be the initial steps to get a good result that demonstrates the superiority of Lisp in web development? Maybe I'm missing the point, but that's how I would initially see the application of my Lisp knowledge. I'm thinking "use dialect A, use IDE B, follow instructions on page C, question your sanity after monads using counsellor D". I'd just like to know what people here consider to be an optimal set of values for A, B, C and perhaps D. Also, some discussion on the relative merit of learning such a powerful language, as opposed to, say, becoming a Rails expert, would be welcome. Just to add some more detail: I'll be developing on MacOS (or a Linux VM) - no Windows-based approaches will be necessary, thanks.
    Notes for those just browsing by: I'm going to keep this question open for a while so that I can offer feedback on the suggestions after I've been able to explore them. If you happen to be browsing by and feel you have something to add, please do. I would really welcome your feedback.
    Interesting links (assuming you're coming at Lisp from a Java background, this set of links will get you started quickly):
    - Using IntelliJ's La Clojure plugin to integrate Lisp (videocast)
    - Lisp for the Web
    - Online version of Practical Common Lisp (c/o Frank Shearar)
    - Land of Lisp - a (+ (+ very quirky) game based) way in, but makes it all so straightforward

    Read the article

  • An XML file or Database?

    - by webnoob
    I am re-writing a section of my site and am trying to decide how much of a rewrite this will be. At the moment I have a web service feed that generates an XML file once per day. I then use this XML file on my website to generate the general structure. I am trying to decide whether this information should be moved into the database or stay in the XML file. The file can range from 4 MB to 12 MB, and its depth can go on and on, so I have to recurse to find the data I want. I use the .NET serializer classes and store the deserialized file in a global variable to avoid re-deserializing it each time the page is loaded. My reasons for thinking a database would be better are:
    - I would know exactly where I am in the file by using an internal ID, so I wouldn't have to recurse the file to get information.
    - I wouldn't have to load/deserialize the XML and could just use my already open database connections.
    - Searching for the data would be quicker(?), as I would just perform a SQL query rather than recursing the file.
    Has anyone got any ideas on which is better, and which option uses more resources on the server or would be quicker?
    EDIT: The file is read every time the web page is loaded (although only deserialized once). It isn't written to by standard users (only by an admin task that runs in the middle of the night). This is my initial investigation before mocking up.
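
    To make the first point concrete, here is a hedged illustration (in Java purely for illustration, although the asker is on .NET, and with invented names) of the difference between recursing a parsed tree on every query and paying one indexing pass up front - roughly what a database primary key, or an in-memory index, buys you:

        import org.w3c.dom.Element;
        import org.w3c.dom.Node;
        import org.w3c.dom.NodeList;
        import java.util.HashMap;
        import java.util.Map;

        class XmlLookup {
            // O(n) per query: walk the whole tree for a matching id attribute
            static Element findById(Element root, String id) {
                if (id.equals(root.getAttribute("id"))) return root;
                NodeList children = root.getChildNodes();
                for (int i = 0; i < children.getLength(); i++) {
                    Node child = children.item(i);
                    if (child instanceof Element) {
                        Element found = findById((Element) child, id);
                        if (found != null) return found;
                    }
                }
                return null;
            }

            // One O(n) pass builds an index; every later lookup is O(1)
            static Map<String, Element> indexById(Element root) {
                Map<String, Element> byId = new HashMap<>();
                collect(root, byId);
                return byId;
            }

            private static void collect(Element e, Map<String, Element> byId) {
                if (!e.getAttribute("id").isEmpty()) byId.put(e.getAttribute("id"), e);
                NodeList children = e.getChildNodes();
                for (int i = 0; i < children.getLength(); i++) {
                    Node child = children.item(i);
                    if (child instanceof Element) collect((Element) child, byId);
                }
            }
        }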

    Read the article

  • Design pattern for procedural terrain assets

    - by Alex
    I'm developing a procedural terrain class at the moment and am stuck on the correct design pattern. The terrain is 2D and is constructed from a series of (x, y) points. I currently have a method that just randomly adds points to an array to generate a random spread of points. However, I need a more elaborate system for generating the terrain. The terrain will be built from a series of recurring terrain structures, e.g. a pit, a jump, a hill, etc. Each structure will have some randomness assigned to it: each hill's height will be random, each pit's size will be random, and so on. Each terrain structure will have:
    - A property detailing the number of points making up that structure
    - A method for generating the points (not absolutely necessary)
    My current thinking is to have a class for each terrain structure, create a fixed number of terrain elements ahead of the player, loop over these, and add the corresponding points to the game. What is the best way to create these procedural terrain structures when they are ultimately just a set of functions for generating terrain elements? Is a class for each terrain element excessive? I'm developing the game for iPhone, so any Objective-C related answers would be welcome.
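
    One hedged way to shape this (in Java for illustration, with invented names): treat each terrain structure as a small strategy object that emits its points relative to where the previous structure ended. A class per structure is not excessive if each one really has its own generation logic; structures that differ only by parameters can share a class.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.Random;

        interface TerrainStructure {
            // emit this structure's points, starting where the previous one ended
            List<float[]> generatePoints(float startX, float startY, Random rng);
        }

        class Hill implements TerrainStructure {
            public List<float[]> generatePoints(float startX, float startY, Random rng) {
                List<float[]> pts = new ArrayList<>();
                float height = 20 + rng.nextFloat() * 40; // per-instance randomness
                int steps = 10;
                for (int i = 0; i <= steps; i++) {
                    float t = (float) i / steps;
                    // a sine bump; a Pit class would dip below startY instead
                    pts.add(new float[] { startX + t * 100f,
                                          startY + (float) Math.sin(t * Math.PI) * height });
                }
                return pts;
            }
        }

    The generator then keeps a queue of such structures ahead of the player, popping each one and appending its points to the terrain array.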

    Read the article

  • How to Make Sure Your Company Doesn't Go Underwater If Your Programmers Are Hit by a Bus

    - by Graviton
    I have a few programmers under me; they are all doing great work and are obviously very smart. Thank you very much. But the problem is that each and every one of them is responsible for one core area, which no one else on the team has the foggiest idea about. This means that if any one of them is taken out, my company as a business is dead, because they aren't replaceable. I'm thinking about bringing in new programmers to cover them, just in case one is hit by a bus, or resigns, or whatever. But I'm afraid that:
    - The old programmers might actively resist the idea of knowledge transfer, fearing that a backup might reduce their value.
    - I don't have a system to facilitate technology transfer between different developers, so even if I ask them to do it, I have no assurance that they will do it properly.
    My questions are:
    - How do I put it to the old programmers in such a way that they would agree?
    - What systems do you use in order to facilitate this kind of "backup"? I understand that code review can help, but is there a simple way to conduct it? I think we are not ready for a full-blown, check-in by check-in code review.

    Read the article

  • Enterprise Architecture IS (should not be) Arbitrary

    - by pat.shepherd
    I took a look at a blog entry today by Jordan Braunstein where he comments on another blog entry titled "Yes, Enterprise Architecture is Relative BUT it is not Arbitrary." The blog makes some good points, such as the following: "Lock 10 architects in 10 separate rooms; provide them all an identical copy of the same business, technical, process, and system requirements; have them design an architecture under the same rules and perspectives; and I guarantee your result will be 10 different architectures of varying degrees." (SOA Today: Enterprise Architecture IS Arbitrary) Agreed... to a degree... but less so if all 10 truly followed one of the widely accepted EA frameworks. My thinking is that EA frameworks all focus on getting the business goals/vision locked down first, as the primary drivers for decisions made lower down the architecture stack. Many people I talk to know about frameworks such as TOGAF, FEA, etc., but seldom apply their tenets to the architecture at hand. We all seem to want to get right into the Visio diagrams and boxes and arrows and connecting protocols and implementation details and lions and tigers and bears (oh, my!) too early. If done properly, the Business, Application and Information architectures are nailed down BEFORE any technological direction (SOA or otherwise) is set. Those three layers and Governance (people and processes), IMHO, are layers that should not vary much, as they have everything to do with understanding the business - from which technological conclusions can later be drawn. I really like what he went on to say later in the post about the fact that architecture attempts to reduce the amount of variance between the 10 different architects' work. That is the real heart of what EA is about: REMOVING THE ARBITRARINESS.

    Read the article

  • links for 2010-05-10

    - by Bob Rhubart
    - Announcing the MOS WCI "Community" (World of WebCenter Interaction): In this community you'll find a product-related discussion forum moderated by Oracle WebCenter Interaction support engineers, recommended tips and tricks, links to knowledge base articles, and best practices for setting up and administering your environment. We hope you'll take a minute to have a look through the community. (tags: oracle otn webcenter enterprise2.0)
    - Jason Williamson: Tuxedo Runtime for CICS and Batch Webcast: "The notion that mainframes can be rehosted on open system is pretty well accepted. There are still some hold out CxO's who don't believe it, but those guys typically are not really looking to migrate anyway and don't take an honest look at the case studies, history and TPC reports." - Jason Williamson (tags: oracle otn entarch tuxedo)
    - Tom Hofte: Analyzing Out-Of-Memory issues in WebLogic 10.3.3 with JRockit 4.0 Flight Recorder: Tom Hofte shows you "how to capture automatically an overall WLS system image, including a JFR image, after an out-of-memory (OOM) exception has occured in the JVM hosting WLS 10.3.3." (tags: oracle otn weblogic soa java)
    - Install Control Center Agent on Oracle Application Server (Oracle Warehouse Builder (OWB) Weblog): Qianqian Wu shows you how to install and configure the Application Server, deploy the Control Center Agent to the Application Server, and perform optional configuration tasks. (tags: oracle otn bi datawarehousing)
    - Frank Buytendijk: BI and EPM Landscape: "Organizations are getting more serious about ecosystem thinking. They do not evaluate single tools anymore for different application areas, but buy into a complete ecosystem of hardware, software and services. The best ecosystem is the one that offers the most options, in environments where the uncertainty is high and investments are hard to reverse. The key to successfully managing such an environment is middleware, and BI and EPM become increasingly middleware intensive. In fact, given the horizontal nature of BI and EPM, sitting on top of all business functions and applications, you could call them 'upperware.'" - Frank Buytendijk (tags: oracle otn enterprisearchitecture bi)

    Read the article

  • An introductory presentation about testing with MSTest, Visual Studio, and Team Foundation Server 2010

    - by Thomas Weller
    While it was very quiet here on my blog during the last months, this was not at all true for the rest of my professional life. The simple story is that I was too busy to find the time for authoring blog posts (and you might see from my previous ones that they're usually not of the 'Hey, I'm currently reading X' or 'I'm currently thinking about Y' kind…). Anyway. Among the things I did during the last months were setting up a TFS environment (2010) and introducing a development team to the MSTest framework (a.k.a. Visual Studio Unit Testing), some additional tools (e.g. Moq, Moles, White), how this is supported in Visual Studio, and how it integrates into the broader context of the then-new TFS environment. After wiping out all the stuff that was directly related to my former customer and reviewing/extending the speaker notes, I thought I'd share this presentation (via Slideshare) with the rest of the world. Hopefully it can be useful to someone else out there… Introduction to testing with MSTest, Visual Studio, and Team Foundation Server 2010 - view more presentations from Thomas Weller. Be sure to also check out the slide notes (either by viewing the presentation directly on Slideshare or - even better - by downloading it). They contain quite some additional information, hints, and (in my opinion) best practices.

    Read the article

  • How to learn & introduce Scrum in a small startup?

    - by Jens Bannmann
    In a few months, a friend will establish his startup software company, and I will be the software architect alongside one additional developer. Though we have no real day-to-day experience with agile methods, I have read much "overview" material on them, and I firmly believe they are a good - if not the only - way to build software. So with this company, I want to go for iterative, agile development from day 1, preferably something lightweight. I was thinking of Scrum, but the question is: what is the best way for me and my colleagues to learn about it, to introduce it (which techniques, and when, etc.), and to evaluate whether we should keep it? Background which might be relevant: we're all experienced developers around the same age with a similar professional mindset. We have worked together in the past and afterwards at several different companies, mostly with a Java/.NET focus. Some are a bit familiar with general ideas from the agile movement. In this startup, I have great power over tools, methods and process. The startup's product will be developed from scratch and could be classified as middleware. We have some "customer" contacts in the industry who could provide input as soon as we get to an alpha stage.

    Read the article

  • How to obtain flow while pair programming in agile development?

    - by bizso09
    Flow is a concept introduced by Mihaly Csikszentmihalyi. In short, it is what most people mean by getting into the "zone": you feel immersed in the task you are doing, you are in deep focus and concentration, and the task difficulty is just right for you while still being challenging. When people achieve flow, their productivity shoots up. Programming requires a great deal of mental focus, and programmers need to juggle several things in their mind at once. Many like to work in a quiet environment where they can direct their full attention to the task. If they are interrupted, it may take several minutes, sometimes hours, to get back into flow. I understand that one agile way of doing software development is pair programming, which is promoted in Extreme Programming too. It means you put the whole software development team in one room so that communication is seamless, and you program with your pair because this way you get instant code reviews and fix bugs sooner. However, I have always had problems obtaining flow while doing pair programming because of the constant stream of interruptions: I'm thinking deeply about an issue, then all of a sudden someone asks me a question from another pair, and my train of thought is lost. How can you obtain and keep flow while doing agile pair programming?

    Read the article

  • How to deal with static utility classes when designing for testability

    - by Benedikt
    We are trying to design our system to be testable and, in most parts, developed using TDD. Currently we are trying to solve the following problem: in various places it is necessary for us to use static helper methods like ImageIO and URLEncoder (both standard Java API) and various other libraries that consist mostly of static methods (like the Apache Commons libraries). But it is extremely difficult to test the methods that use such static helper classes. I have several ideas for solving this problem:
    - Use a mock framework that can mock static classes (like PowerMock). This may be the simplest solution but somehow feels like giving up.
    - Create instantiable wrapper classes around all those static utilities, so they can be injected into the classes that use them. This sounds like a relatively clean solution, but I fear we'll end up creating an awful lot of those wrapper classes.
    - Extract every call to these static helper classes into a function that can be overridden, and test a subclass of the class I actually want to test.
    But I keep thinking that this just has to be a problem that many people face when doing TDD - so there must already be solutions for it. What is the best strategy to keep classes that use these static helpers testable?
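
    A hedged sketch of the second idea, using java.net.URLEncoder as the static helper (the interface and class names are invented for the example): the wrapper is tiny, and the class under test depends on the interface rather than on the static call, so a test can substitute a stub.

        import java.io.UnsupportedEncodingException;
        import java.net.URLEncoder;

        interface UrlEncoder {
            String encode(String s);
        }

        class DefaultUrlEncoder implements UrlEncoder {
            public String encode(String s) {
                try {
                    return URLEncoder.encode(s, "UTF-8");
                } catch (UnsupportedEncodingException e) {
                    throw new IllegalStateException("UTF-8 is always available", e);
                }
            }
        }

        // The production class takes the wrapper via its constructor...
        class LinkBuilder {
            private final UrlEncoder encoder;
            LinkBuilder(UrlEncoder encoder) { this.encoder = encoder; }
            String build(String base, String query) {
                return base + "?q=" + encoder.encode(query);
            }
        }

        // ...and a unit test can inject a trivial fake: new LinkBuilder(s -> s)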

    Read the article

  • "Integratable" but not "integrated" GPL

    - by mgibsonbr
    There has been much debate over whether or not merely linking to a piece of code makes it a derivative work. I know the FSF says "yes", so according to them I can't dynamically link a non-GPL-compatible program to a GPL library and distribute the whole. But I could do that for private use, as long as no code is released to the public. That made me wonder: what if I don't redistribute the GPL code at all? If my program can work alone (reinforcing my claim that it's not a derivative work), but can do more if the GPL library is also installed on the system, couldn't I just release my application under my own licensing terms - without including any GPL code - and post instructions for anyone interested to separately download the GPL code and do the integration "for their private use"? I know it's against the "spirit" of the GPL, so I'm not suggesting it's a good idea to do that. However, this question has been bugging me for some time, especially because of the implications of each answer:
    - If I can not do that: can I write another library with a similar API? (Before answering "of course you can", remember that having the same API would allow both libraries to be swapped at will by my customers - so I don't need to work too hard on my library or even make it "work". How do you determine whether a similar program is just similar or is a circumvention attempt?)
    - If I can do that: can I also be paid to perform the service of installing the GPL library for a customer? (I sell them my program, install it on their machines, and download and install the GPL library too.) Can I put the two programs on the same website? On two different CDs? (I know I said the idea was not to redistribute the GPL code; I'm just thinking of excuses people could use to claim they're not redistributing even though they are.)

    Read the article

  • release management system - architectural question

    - by Sonic Soul
    Every place I've worked created its own release process, and all of them worked pretty well, but it took considerable effort (and often a dedicated team) to manage releases. I am currently at a new place and about to design such a system, but this time the team is very lean and we won't have dedicated resources for releasing. It will be up to the development manager until the system is proven enough for other developers to use. We're using Subversion as the code repository, TeamCity as the build server, Jira as the issue tracker, and an Oracle database. I was thinking about writing a basic workflow app that will let developers create a new manifest specifying the following items:
    - release details (who, Jira issues, etc.)
    - workflow step (dev, test, UAT, prod approved, prod released)
    - source files
    That last item is where it can get hairy, especially with database scripts. I figured I'd ask whether there is a good pattern, or an off-the-shelf product, that could help with the database part, or perhaps the whole process. I briefly tested the Red Gate Oracle deployment tool, but it didn't work out as well as I had hoped (from one day of testing, at least). Questions:
    - I think I could get around releasing our code with something like Octopus Deploy straight from TeamCity. I am not clear, however, on how I could create a simple database deployment part that will track which version of which script (from Subversion) has been deployed where.
    - Is there already some utility I could use for navigating Subversion to choose which scripts should be released, instead of writing one from scratch? I'd just need it to produce some manifest of paths + versions.

    Read the article

  • Managing user privileges: best practices

    - by Loïc N.
    I am new to web development. I'm creating a website where different users can have different privileges, such as creating/editing/deleting a news item, or adding/editing/deleting whatever kind of content on the website. I started by creating a "user type" that would indicate the user's privileges (such as "user", "newser", "moderator", "admin", and so on), but I quickly started noticing issues that made me think this might be a naive approach. What if I want to give a regular user the right to edit a news item (for whatever reason)? Then the user would be half "user", half "newser", but the system I use can only handle one user type. So what would be the best practice here? I was thinking of removing the concept of roles (or "user types" such as "newser") and keeping only the concept of a "privilege", where every user could have zero to many privileges. So, to re-use the above example, if I wanted a user to have the right to edit some news, I would only have to give him an "edit news" privilege. Is this the way to go?
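
    A hedged sketch of the privilege-only approach (in Java purely for illustration; all names are invented): each user carries a set of fine-grained privileges, and a "role" degenerates into a reusable bundle of them, so granting one extra right needs no new user type.

        import java.util.EnumSet;
        import java.util.Set;

        enum Privilege { CREATE_NEWS, EDIT_NEWS, DELETE_NEWS, EDIT_PAGES }

        class User {
            private final Set<Privilege> privileges;
            User(Set<Privilege> privileges) { this.privileges = privileges; }
            boolean can(Privilege p) { return privileges.contains(p); }
        }

        class Demo {
            public static void main(String[] args) {
                // the "newser" role is just a bundle of privileges...
                User newser = new User(EnumSet.of(Privilege.CREATE_NEWS, Privilege.EDIT_NEWS));
                // ...and a regular user can be granted a single extra right ad hoc
                User regular = new User(EnumSet.of(Privilege.EDIT_NEWS));
                System.out.println(newser.can(Privilege.DELETE_NEWS)); // false
                System.out.println(regular.can(Privilege.EDIT_NEWS)); // true
            }
        }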

    Read the article

  • 2D Side Scrolling game and "walk over ground" collision detection

    - by Fire-Dragon-DoL
    The question is not hard. I'm writing a game engine for 2D side-scrolling games, but when I think about my own 2D side-scrolling game I always run into the problem of how to handle collision with the ground ("ground" for me means "where the player walks", so something heavily used). I don't think I could handle collision with the ground per-pixel, and I can't do it with simple shape comparison either (because the ground can be tilted), so what's the correct way? I know what tiles are and I've read about them, but how big should each tile be so that a slope doesn't look like a staircase? Are there any other approaches? I watched this game, and the way the character walks on the ground is very nice: http://www.youtube.com/watch?v=DmSAQwbbig8&feature=player_embedded
    If there are "platforms" in mid-air, how should I handle them? I can walk over them, but I can't pass through them: imagine a platform in mid-air that allows you to walk over it but also limits you, because you can't jump up through the area it occupies. Sorry for my English; it's not my native language, and this topic has a lot of keywords I don't know, so I have to use workarounds. Thanks for any answer.
    Additional information and suggestions: I'm taking a game course at the moment and I asked the instructors how to do this. They suggested this approach (a quadtree):
    - The whole map is divided into "big nodes".
    - Each bigger node has sub-nodes, used to find where the player is.
    - You can find the player's node with a ray at the player's position.
    - Once you've found the node the player is in, you can do the collision check over all its pixels (which can be 100-200 px, nothing more).
    Here is an example; however, I didn't draw the bigger nodes very well because I'm not very good with Photoshop :P. How is this approach?
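
    For the mid-air platforms question above, one common trick is the "one-way" (jump-through) platform - a hedged sketch, not taken from the linked video, with invented names and a coordinate system where y grows downward: the player collides only while moving downward and only if their feet were above the platform's top on the previous frame, so jumping up through it from below just passes.

        class Player {
            float x, y, width, height; // (x, y) is the top-left corner
            float prevY;               // y on the previous frame
            float velocityY;           // positive = falling
        }

        class Platform {
            float x, y, width;         // thin strip; y is the top surface
        }

        class OneWayCollision {
            static boolean lands(Player p, Platform plat) {
                float feet = p.y + p.height;
                float prevFeet = p.prevY + p.height;
                boolean overlapX = p.x + p.width > plat.x && p.x < plat.x + plat.width;
                boolean falling = p.velocityY > 0;
                boolean crossedTop = prevFeet <= plat.y && feet >= plat.y;
                // on landing, snap p.y to plat.y - p.height and zero velocityY
                return overlapX && falling && crossedTop;
            }
        }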

    Read the article

  • Is this information about me as a programmer concise and good enough?

    - by Nick Rosencrantz
    I would not only like you to review my resume, but please also tell me what you think Google means when they answered me: "We don't look at personal letters and we like your resume and we can recommend you internally, but we need measurable experience." What is meant by "measurable" here? Do they mean something like O(1) compared to O(n), selling an entire company, grades, or what? This is what I sent:
    Curriculum vitae - Nick Rosencrantz
    Competence: system development, web development
    Technical competence: Java, JavaScript, HTML, XML, CSS, AJAX, PHP, SQL, Python
    Employments:
    - 2012-: Mobile Innovation AB, System Developer. IT consultant (Java programmer)
    - 2011-2012: Bnano International Ltd, System Developer. Python programming in Google App Engine
    - 2008-2009: Sweden Island AB, System Developer. Programming C++ and Java EE components
    - 2003-2007: Studies, Stockholm School of Economics. During studies worked as network technician at Effnet AB
    - 2000-2002: Jadestone AB, System Developer. System development in Java/J2EE. In 2001: KTH, assistant, teaching application server programming in Java Enterprise + WebLogic + Informix
    - 1999-2000: Studies, KTH
    - 1996-1998: Spray.se, system development, researcher
    - 1995: Finance broker, back-office work with financial instruments
    - 1993-1994: Computer & Audio-Technical Systems AB, programming, summer job
    Education/courses: Stockholm School of Economics, Master of Science diploma; KTH, computer science undergraduate studies
    Languages: Swedish, English, also some German and French
    Born 1973, Swedish citizen
    I also have a project-based CV which is several pages long, but the above is about what I was aiming for in the beginning when I was looking for a job. Now I have employment as an IT consultant in central Stockholm, and I want to make my resume concise and also understand what Google meant by their answer. (It was a Swedish Google employee who recruited via LinkedIn from my Stockholm School of Economics groups, since that is a small elite economics school where I took my M.Sc., and KTH is one of the largest universities in northern Europe, so I sent her a link to my CV and she said she could promote me internally if I added "measurable experience", and I've been thinking for weeks about what that might mean.)

    Read the article

  • Partnering with your Applications – The Oracle AppAdvantage Story

    - by JuergenKress
    So, what is Oracle AppAdvantage?
    - A practical approach to adopting cloud, mobile, social and other trends
    - A guided path to aligning IT more closely with business objectives
    - Maximizing the value of existing investments in applications
    - A layered approach to simplifying IT, building differentiation and bringing innovation
    - All of the above?
    Enhance the value of your existing applications investment with Oracle AppAdvantage: align business and IT expectations on simplifying IT, building differentiation and innovation, and adopt a pace-layered approach to bringing cloud, social and mobile to your apps.
    Embracing Situational IT: In the next IT Leaders Editorial, Rick Beers discusses the necessity of IT disruption, sheds light on situational leadership and the path to success, and draws parallels between CIOs' strategic thinking and the Oracle AppAdvantage approach. What does situational leadership have to do with Oracle AppAdvantage? Catch the next piece in Rick Beers' monthly series of IT Leaders Editorials and find out. Do you have this paper on your summer reading list?
    Middleware Minutes with Howard Beader - August edition: In the inaugural post of this quarterly column, Howard Beader, senior director for Oracle Fusion Middleware, discusses recent industry trends - mobile, cloud, fast data and integration - and how these are shaping IT and business requirements, along with a recent middleware news update and a preview of things to come from Oracle.
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.
    Technorati Tags: AppAdvantage, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • DENY select on sys.dm_db_index_physical_stats

    - by steveh99999
    Technorati Tags: security, DMV, permission, sys.dm_db_index_physical_stats
    I recently saw an interesting blog article by Paul Randal about the performance overhead of querying sys.dm_db_index_physical_stats. So I was thinking: would it be possible to let non-sysadmin users query DMVs on a SQL Server, but stop them querying this I/O-intensive DMV? Yes it is - here's how.
    1. Create a new login for test purposes, with permissions to access the AdventureWorks database only:
       CREATE LOGIN [test] WITH PASSWORD='xxxx', DEFAULT_DATABASE=[AdventureWorks]
       GO
       USE [AdventureWorks]
       GO
       CREATE USER [test] FOR LOGIN [test] WITH DEFAULT_SCHEMA=[dbo]
       GO
    2. Log in as user test and issue the command:
       SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL, 'DETAILED')
       This gets the error:
       Msg 297, Level 16, State 12, Line 1
       The user does not have permission to perform this action.
    3. As a sysadmin, issue the command:
       USE AdventureWorks
       GRANT VIEW DATABASE STATE TO [test]
       or GRANT VIEW SERVER STATE TO [test] if all databases can be queried via DMVs.
    4. Try again as user test to issue the command:
       SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL, 'DETAILED')
       This now produces valid results from the DMV.
    5. Now create the test user in the master database, public role only:
       USE master
       CREATE USER [test] FOR LOGIN [test]
    6. Issue the command:
       USE master
       DENY SELECT ON sys.dm_db_index_physical_stats TO [test]
    7. Now go back to AdventureWorks using the test login and try:
       SELECT * FROM sys.dm_db_index_physical_stats(DB_ID('AdventureWorks'), NULL, NULL, NULL, 'DETAILED')
       This now gets the error:
       Msg 229, Level 14, State 5, Line 1
       The SELECT permission was denied on the object 'dm_db_index_physical_stats', database 'mssqlsystemresource', schema 'sys'.
    ...but the user is still able to query all other, non-I/O-intensive DMVs. If the user attempts to view the index physical stats via a built-in Management Studio report (see a recent blog post by Pinal Dave), they get an error also.

    Read the article

  • Friday Fun: Christmas Tree Light Up

    - by Asian Angel
    Another week has thankfully passed by, so it is time to take a break and have some fun. This week's game tests your ability to light up the whole Christmas tree… can you figure out the correct wiring configuration? Christmas Tree Light Up: The object of the game is simple… light up all of the bulbs on the Christmas tree. While the game may look quick and easy at first, you will need to do some thinking and experimenting to come up with the correct wiring configuration. The instructions are very simple… just click on any of the wiring sections or bulbs to rotate them. Keep in mind that you may have to click a few times to line the wiring sections or bulbs up as desired, since the rotation is always clockwise. Note: you will need to use all of the wiring sections available to completely light up the tree. Each time you will be presented with a different starting setup coming from your power source. Time to hook up the lights! Note: it is recommended that you disable the sound for the game, since the "rotation" sounds can be slightly irritating. A nice start, but there are still a lot of bulbs to light up. Getting closer… Almost there… only two more bulbs to light up. Success! Have fun playing! Play Christmas Tree Light Up

    Read the article

  • How to implement Facebook leaderboard game for mobile?

    - by TrueGrime
    I am in the final stages of developing an indie C# mobile game that I will deploy to iPhones and Androids using Mono. I wish to add push notifications and a Facebook leaderboard feature to the game, such that on the main screen the user will be presented with a list of all his/her friends that are playing the game, along with their weekly scores. I have ZERO experience with web development/networking/databases. I recently started researching the field and have formed a basic understanding. Facebook has SDKs in PHP, JavaScript, Objective-C and Java, though reading the documentation examples still feels cryptic to me, since it involves web/server technology. After researching some more, I understood that my options for the server side are basically PHP or ASP.NET. It seems that ASP.NET is more favorable in my case, since I am already proficient with C# (but there is no ASP.NET SDK from Facebook... I am not sure if this implies that I can't interact with Facebook using ASP.NET). On the downside, some have mentioned higher costs for ASP.NET, though I haven't looked further into that aspect yet. I also understood that JavaScript is a client-side technology. I started going through tutorials on ASP.NET; I was thinking that ASP.NET is a purely server-side management language, but the tutorials started feeling more like WPF, as they went into very lengthy discussions about creating website interfaces and styles. I am not interested in that; I just want to have a web server my app can somehow communicate with to get friends and scores. Am I learning the right technologies for my goal? Should I be learning something else? I am posting this in the hope that someone who knows this field well can see through my problem and help guide me; otherwise I could spend months studying something that might not be the right solution for my goal. Thanks.

    Read the article

  • JavaScript objects and Crockford's The Good Parts

    - by Jonathan
    I've been thinking quite a bit recently about how to do OOP in JavaScript, especially when it comes to encapsulation and inheritance. According to Crockford, the classical pattern is harmful because of new(), and both the prototypal and classical patterns are limited because their use of constructor.prototype means you can't use closures for encapsulation. Recently, I've considered the following couple of points about encapsulation:
    - Encapsulation kills performance. It makes you add functions to EACH member object rather than to the prototype, because each object's methods have different closures (each object has different private members).
    - Encapsulation forces the ugly "var that = this" workaround to give private helper functions access to the instance they're attached to - either that, or making sure you call them with privateFunction.apply(this) every time.
    Are there workarounds for either of the two issues I mentioned? If not, do you still consider encapsulation to be worth it?
    Side note: the functional pattern Crockford describes doesn't even let you add public methods that touch only public members, since it completely forgoes the use of new() and constructor.prototype. Wouldn't a hybrid approach - where you use classical inheritance and new(), but also call Super.apply(this, arguments) to initialize private members and privileged methods - be superior?

    Read the article
