Search Results

Search found 4940 results on 198 pages for 'understanding'.


  • Oracle University Seminars April/May/June 2012

    - by A&C Redaktion
    Oracle University's Expert Seminars are exclusive events delivered by top experts with years of experience working with Oracle products. The seminar topics cover the breadth of the Oracle portfolio and are aimed at helping you get the most out of your Oracle products and stay connected with new technologies. We currently have the following events scheduled:
    RAC Performance Tuning (online) with Arup Nanda - 30 Apr 2012
    Understanding Explain Plans & Index Utilization (online) with Dan Hotka - 8-9 May 2012
    Writing Optimal SQL & Troubleshooting & Tuning with Jonathan Lewis, Düsseldorf - 9-10 May 2012
    Flashback Techniques in Oracle 11g with Carl Dudley, München - 14 May 2012
    Minimize Downtime with Rolling Upgrade using Data Guard with Uwe Hesse, München - 16 May 2012
    Enterprise Business Intelligence 11g Advanced Development (online) with Mark Rittman - 24-25 May 2012
    Mastering Backup & Recovery with Francisco Munoz Alvarez, Düsseldorf - 29-30 May 2012
    Real World Java EE 6 - Re-thinking Best Practice with Adam Bien, München - 15 Jun 2012

    Read the article

  • Register Now, Free Webinar! Driving Self-Service Learning with UPK Knowledge Center

    - by Kathryn Lustenberger
    UPK Proficiency Forum: Driving Self-Service Learning with UPK Knowledge Center - July 16 at 11 am Pacific. Join Oracle University for the next UPK Proficiency Forum on July 16 at 11 am Pacific. Beth Renstrom and Kathryn Lustenberger from UPK Product Management at Oracle will present an exciting session on "Driving Self-Service Learning with UPK Knowledge Center." Knowledge Center is a powerful, web-based knowledge repository that delivers an out-of-the-box deployment method for UPK content, enables extensive tracking and reporting, and can serve as a content repository for UPK and non-UPK content. Hear how your organization can use Knowledge Center to centralize both UPK and non-UPK assets to provide self-service, role-based, curriculum-style learning. Understand how Knowledge Center can be used to deploy a collaborative user and expert environment where users can turn knowledge into productivity, ensure ongoing user competency, and measure readiness across your organization. You will walk away from this session with a better understanding of Oracle's User Productivity Kit Professional, Knowledge Center, and all the benefits they have to offer your organization. You won't want to miss this free webinar! Attendance is limited. Register now!

    Read the article

  • Exadata X3 Sales/Pre-Sales/Support Resell Rights Enablement Day

    - by mseika
    Dear Partner, Avnet and Oracle would like to invite you to a FREE Exadata X3 Sales/Pre-Sales/Support Resell Rights Enablement Day, taking place on Wednesday 16th January 2013 at the Oracle London City offices in Moorgate. We will give you the opportunity to get an in-depth understanding of how hardware and software are engineered to work together to create the power and scalability of Exadata. The session will focus on Oracle Exadata fundamentals, features, components and capabilities. The event will last a full day and will give you the opportunity to put questions to the presenters, helping you understand how to spot an Exadata opportunity and how to position Exadata in an opportunity.
    Register Now: register online or call our Hotline on 01925 856999.
    When: Wednesday 16th January 2013, 9.30am to 5.00pm.
    Where: Oracle Corporation UK Ltd., One South Place, London EC2M 2RB. For directions please see the location map.
    Cost: No charge.
    Contact Us: Avnet Technology Solutions Limited, Clarity House, 103 Dalton Avenue, Birchwood Park, Warrington, WA3 6YB, UK. T: 01925 856900, F: 01925 856901, E: [email protected]. Or find us online: Avnet website, LinkedIn, Twitter, Facebook.

    Read the article

  • Whois status "pending delete" with expiration date in November 2011???

    - by Sylver
    A friend of mine is in the process of being scammed by a domain registrar and I am trying to sort out the mess. However, I could use a hand understanding some of the details. He paid for 2 years of domain name registration on 6 November 2009. The whois record reads:
      Domain ID:XXXXXXXXXX
      Domain Name:XXXXXXXXX.ORG
      Created On:06-Nov-2009 09:23:12 UTC
      Last Updated On:17-Dec-2010 00:15:10 UTC
      Expiration Date:06-Nov-2011 09:23:12 UTC
      Sponsoring Registrar:OnlineNIC Inc. (R64-LROR)
      Status:CLIENT TRANSFER PROHIBITED
      Status:HOLD
      Status:PENDING DELETE SCHEDULED FOR RELEASE
      Registrant ID:ONLC-XXXXXXX-X
      Registrant Name:My friend's name
      ...
      Registrant Email:Old email
    The registrar charged a renewal fee a week ago and is now asking an extra $150 to "reclaim" the domain name, even though the domain name is apparently still in my friend's name and it looks like there are still another 10 months before the expiry date. The expiration date on the WhoIs record looks right (Nov 2011), so I don't understand why the domain status says "PENDING DELETE SCHEDULED FOR RELEASE". Can someone explain what the deal is and what I need to do to get the domain name transferred to a more honest registrar? I already have a registrar for my own domain names (I've been using them for 10 years without problems), so I know where to transfer the domain names to; I just don't know how to proceed.

    Read the article

  • Painting with pixel shaders

    - by Gustavo Maciel
    I have an almost complete understanding of how 2D lighting works. I saw this post and was tempted to try implementing it in HLSL. I planned to paint each of the layers with shaders and then combine them by just drawing one on top of another, or by passing the 3 textures to the shader and finding a better way to combine them. It works almost as planned, but I have a question. I'm drawing each layer this way:
      GraphicsDevice.SetRenderTarget(lighting);
      GraphicsDevice.Clear(Color.Transparent);
      // ... set up shader
      SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullNone, lightingShader);
      SpriteBatch.Draw(texture, fullscreen, Color.White);
      SpriteBatch.End();

      GraphicsDevice.SetRenderTarget(darkMask);
      GraphicsDevice.Clear(Color.Transparent);
      // ... set up shader
      SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.None, RasterizerState.CullNone, darkMaskShader);
      SpriteBatch.Draw(texture, fullscreen, Color.White);
      SpriteBatch.End();
    Here lightingShader and darkMaskShader are shaders that, given parameters (view and projection matrices, light position, color and range, etc.), generate a texture meant to be that layer. It works fine, but I'm not sure that drawing a transparent quad on top of a transparent render target is the best way of doing it, because I actually just need the position and parameters. To conclude: can I paint a texture with shaders without having to clear it and then draw a transparent texture on top of it?
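    As a sketch of the "pass the 3 textures to the shader" idea mentioned above (combineEffect and sceneTexture are placeholder names, and the effect is assumed to sample the scene, lighting and dark-mask textures and blend them in its pixel shader - this is an illustration, not code from the original post):
      // Sketch: bind the two layer render targets to extra texture slots and let a
      // (hypothetical) combineEffect blend them with the scene in a single pass.
      GraphicsDevice.SetRenderTarget(null);      // render to the back buffer
      GraphicsDevice.Textures[1] = lighting;     // lighting layer rendered above
      GraphicsDevice.Textures[2] = darkMask;     // dark-mask layer rendered above
      SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
          SamplerState.LinearClamp, DepthStencilState.None,
          RasterizerState.CullNone, combineEffect);
      SpriteBatch.Draw(sceneTexture, fullscreen, Color.White); // scene goes to sampler s0
      SpriteBatch.End();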

    Read the article

  • What happens to remounted data/directories

    - by cauon
    According to suggestions in this post I am trying to tune my system to run better with a solid state drive. But I have some trouble understanding RAM disks and /etc/fstab usage. Let's say I add the following lines to /etc/fstab:
      tmpfs /tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0
      tmpfs /var/spool tmpfs defaults,noatime,nodiratime,mode=1777 0 0
      tmpfs /var/tmp tmpfs defaults,noatime,nodiratime,mode=1777 0 0
      tmpfs /var/log tmpfs defaults,noatime,nodiratime,mode=0755 0 0
    I know that on startup these locations should now get mounted into RAM (hopefully). But what happens to the physical space that was mounted at those places before? Is it gone? Will it be back when I edit my /etc/fstab back to the version without tmpfs? Will the space still be allocated on my SSD in a way that I can't use it for any other data? Sometimes it is suggested to add the following line, too:
      none /var/cache aufs dirs=/tmp:/var/cache=ro 0 0
    What does this actually do? I noticed that /var/cache takes almost 1 GB of space on my hard disk. So should I clear the directory before activating this line? (This is related to the former question.) This causes me some confusion and I hope you can give me some clarification.
    UPDATE: I downloaded an image 600 MB in size into /tmp, which is mounted with the tmpfs settings above. Then I wanted to compare the RAM usage before and after the download. I expected the RAM usage to increase by 600 MB after the download, but the System Monitoring tool showed me no changes at all. How can this be? Does tmpfs work differently from how I expect it to?

    Read the article

  • Book Review: Professional WCF 4

    - by Sam Abraham
    My investigation of WCF internals has set the right stage to revisit Professional WCF 4 by Pablo Cibraro, Kurt Claeys, Fabio Cozzolino and Johann Grabner. In this book, the authors dive deep into all aspects of the WCF API in a reading targeted towards intermediate and advanced developers. Book quality, so far as presentation, code completeness, content clarity and organization go, was superb. The authors have taken a hands-on approach to thoroughly covering the WCF 4.0 API, with three chapters totaling 100+ pages completely dedicated to business cases and with downloadable source code readily available. Chapter 1 outlines SOA best-practice considerations. The next three chapters take a top-down approach to the WCF API, covering service and data contracts, bindings, clients, instancing and Workflow Services, followed by another carefully thought-out three chapters covering the security options available via the WCF API. In conclusion, Professional WCF 4 provides thorough coverage of the WCF API and is a recommended read for anybody looking to reinforce their understanding of the various features available in the WCF framework. Many thanks to the Wiley/Wrox User Group Program for their support of our West Palm Beach Developers' Group. All the best, --Sam

    Read the article

  • How are lookaheads propagated in the "channel" method of building an LALR parser?

    - by greenoldman
    The method is described in the Dragon Book; however, I read about it in "Parsing Techniques" by D. Grune and C.J.H. Jacobs. I start from my understanding of building channels for the NFA:
    - channels are built once; they are like water channels with a current
    - you "drop" lookahead symbols in the right places (the sources) of the channel, and they propagate with the "current"
    - when a symbol propagates, there are no barriers (the only things needed for propagation are the presence of a channel and its direction/current); i.e. a lookahead cannot just die out of the blue
    Is that right? If I am correct, then the eof lookahead should be present in all states, because its source is the start production, and all other production states are reachable from the start state. How the DFA is made out of this NFA is not perfectly clear to me -- the authors of the mentioned book write about preserving channels, but I see no purpose, if you have already propagated the lookaheads. If the channels have to be preserved, are they cut off from the source if the DFA state does not include the source NFA state? I assume no -- the channels still run between DFA states, not only within a given DFA state. In effect, eof should still be present in all items in all states. But when you take a look at the DFA presented in the book (the pdf is from the errata): DFA for LALR (fig. 9.34 in the book, p. 301), you will see there are items without eof in the lookahead. The grammar for this DFA is:
      S -> E
      E -> E - T
      E -> T
      T -> ( E )
      T -> n
    So how was it computed, when was eof dropped, and on what condition? Update: It is a textual pdf, so here are the two interesting states (in the DFA; # is eof):
    State 1:
      S -> •E [#]
      E -> •E-T [#-]
      E -> •T [#-]
      T -> •n [#-]
      T -> •(E) [#-]
    State 6:
      T -> (•E) [#-)]
      E -> •E-T [-)]
      E -> •T [-)]
      T -> •n [-)]
      T -> •(E) [-)]
    The arc from 1 to 6 is labeled (.
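    For reference, the standard LR(1)/LALR closure rule (stated here from general parser-construction theory; the notation is mine and is not quoted from either book) shows where a parent item's lookahead can be dropped:
      % Closure of an LR(1) item: the spawned item's lookahead is computed
      % from what FOLLOWS the nonterminal, not copied from the parent.
      \[
      [A \to \alpha \cdot B\,\beta,\ L] \;\Longrightarrow\; [B \to \cdot\,\gamma,\ \operatorname{FIRST}(\beta L)]
      \]
      % The parent lookahead L reaches the spawned item only if \beta can
      % derive the empty string. In State 6 above, the suffix after E in
      % T -> ( . E ) is the non-nullable ")", so:
      \[
      \operatorname{FIRST}\bigl(\,\texttt{)}\,L\bigr) = \{\texttt{)}\}, \qquad L = \{\#,\ \texttt{-},\ \texttt{)}\}
      \]
      % The "-" in the E-items' lookaheads then comes from closing E -> . E - T,
      % where FIRST("- T" L') = {-}; the parent's # is blocked by ")" and never
      % reaches the spawned items.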

    Read the article

  • Gap in parallaxing background loop

    - by CinetiK
    The bug here is that my background offsets itself a bit from where it should be drawn, and so I get a visible seam line. I have trouble understanding why I get this bug when I set a speed that is different from 1, 2, 4, 8, 16, ... In the main class I set the speed depending on the player speed:
      bgSpeed = -(int)playerMoveSpeed.X / 10;
    and here's my background class:
      class ParallaxingBackground
      {
          Texture2D texture;
          Vector2[] positions;

          public int Speed { get; set; }

          public void Initialize(ContentManager content, String texturePath, int screenWidth, int speed)
          {
              texture = content.Load<Texture2D>(texturePath);
              this.Speed = speed;
              positions = new Vector2[screenWidth / texture.Width + 2];
              for (int i = 0; i < positions.Length; i++)
              {
                  positions[i] = new Vector2(i * texture.Width, 0);
              }
          }

          public void Update()
          {
              for (int i = 0; i < positions.Length; i++)
              {
                  positions[i].X += Speed;
                  if (Speed <= 0)
                  {
                      if (positions[i].X <= -texture.Width)
                      {
                          positions[i].X = texture.Width * (positions.Length - 1);
                      }
                  }
                  else
                  {
                      if (positions[i].X >= texture.Width * (positions.Length - 1))
                      {
                          positions[i].X = -texture.Width;
                      }
                  }
              }
          }

          public void Draw(SpriteBatch spriteBatch)
          {
              for (int i = 0; i < positions.Length; i++)
              {
                  spriteBatch.Draw(texture, positions[i], Color.White);
              }
          }
      }
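    One plausible source of such a seam (a sketch under my own assumptions, not necessarily the fix the original poster needed): the snapping in Update() discards however far a tile travelled past the threshold during the frame, and that overshoot is zero only when Speed divides texture.Width evenly, which is why powers of two happen to work. Wrapping by a whole strip length preserves the overshoot:
      // Sketch: instead of snapping to an exact position, shift the tile by the
      // full strip length, so the per-frame overshoot is kept and no gap opens.
      float stripWidth = texture.Width * positions.Length;
      for (int i = 0; i < positions.Length; i++)
      {
          positions[i].X += Speed;
          if (positions[i].X <= -texture.Width)
              positions[i].X += stripWidth;                     // wrapped off the left edge
          else if (positions[i].X >= stripWidth - texture.Width)
              positions[i].X -= stripWidth;                     // wrapped off the right edge
      }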

    Read the article

  • Improve Customer Experience with Real-Time Scheduling

    - by ruth.donohue
    Recently, my husband rearranged his busy work schedule so that he could stay home an entire afternoon to wait for the alarm company to reset the password to our alarm system, only to discover at the end of the afternoon that the field service rep wasn't going to be able to make the appointment after all. And the company asked him to reschedule and block off time for another afternoon. Needless to say, my husband wasn't happy with that experience. Unfortunately, customer experiences like this happen every day. As a business, you can't afford these types of encounters. It's too easy for your customers to turn to one of your competitors once they've reached the point of frustration. Customer experience and customer loyalty are more important than ever. So how can you prevent something like this from occurring? With the newly available Siebel Field Service Integration with Oracle Real-Time Scheduler, your service organization can:
    - Create cost-optimized plans and schedules to improve operating efficiencies
    - Deliver more accurate ETAs and shorten appointment windows
    - Minimize the impact of in-day events such as delays on site, sickness, poor weather conditions, and vehicle breakdowns
    Rather than requiring customers to wait for an entire afternoon, imagine asking them to be available for only an hour, and being able to commit to that time by working around unforeseen events and understanding the impact of delays or re-routings before they become customer issues. What would your customer experience and customer satisfaction be like then? Learn more about the Siebel Field Service Integration with Oracle Real-Time Scheduler:
    - Register for and attend the upcoming webcast on Thursday, March 10th at 8:30 AM Pacific Time
    - Read the press release, data sheet, and solution brief
    - Visit the Siebel Field Service webpage

    Read the article

  • Installation experiences with NDepend under Win7/64 with restricted user permissions

    - by Marko Apfel
    Today Patrick gave me a new license for his static code analysis tool NDepend, for my fresh machine with Win7/64. This platform is new for me, so some things are different from Win XP. Maybe some of these things are not yet well enough understood by me, so I stepped into some traps. Here are my notes on getting NDepend running:
    - Downloaded NDepend Professional Edition from http://www.ndepend.com/NDependDownload.aspx
    - Extracted it to c:\program files (x86)\NDepend
    - Started NDepend.Install.VisualStudioAddin.exe; this failed (error dialog not reproduced here). Okay - sounds plausible.
    - Copied NDependProLicense.xml to this folder
    - The next try with NDepend.Install.VisualStudioAddin.exe opened the integration dialog
    - Registering in Visual Studio failed again
    - Manually unblocked the file as described (first solution hint), and here comes my largest understanding problem: after unblocking this file and closing the dialog, the next opening shows the file blocked again. Why? So the same error pops up during integration.
    - Okay - tried the second solution hint, copying the folders: copied everything to a fully accessible folder under c:\temp\
    - Now the installation works and looks good
    - Copied the folders back to c:\program files (x86)\NDepend
    - Starting Visual Studio failed
    - Okay - copied the folder to a private application folder, c:\users\apf\My Applications\NDepend
    - Installed again
    - Now Visual Studio runs and NDepend is integrated
    Even though my machine is only used by me, I prefer "all users" installations. Sadly, the described way works only for my account.

    Read the article

  • Beyond S&OP: Integrated Business Planning

    - by Paul Homchick
    In most corporations, planning is done at the department level — leaving disconnects and gaps across different departments. Finance sets revenue and profit goals with minimum validation from Manufacturing that the company has the resources, material, capacity, or demand to reach these goals. On the operations side, Manufacturing is developing plans to balance demand and supply but seldom knows if the resulting "plan" will meet the budgets on which the company's revenue and profit goals are based. The Sales department agrees to quotas that meet Finance's revenue goals without a complete understanding of what manufacturing can deliver. Integrated Business Planning (IBP) bridges these gaps in corporate planning systems. Integrated Business Planning integrates the financial planning provided by EPM systems with operations planning provided by Sales and Operations Planning solutions. This means that revenue goals and budgets are validated against a bottom-up operating plan, and that the operating plan is reconciled against financial goals. When detailed changes are made to the operations plan, planners can immediately see the big-picture impact of the changes. IBP also addresses one of the CFO's big concerns: the reliability of the revenue forecast. Operating plans are updated daily or weekly from a precise forecast based on current market conditions. These updated plans are then made available so that financial analysts are working with data that best represents what is going to happen - not what they projected would happen based on last quarter's data. For a discussion in more depth, see my article: Improve Reliability of Financial Forecasts with Integrated Business Planning in Supply & Demand Chain Executive magazine.

    Read the article

  • Efficient mapping layout in 2D side-scroller, and collisions between character and the world

    - by Jack
    I haven't touched Visual Studio for a couple of months now, but I was playing a game from the '90s today and had an epiphany: I was looking for something I didn't need, and I wasn't using what I knew correctly. One of those realizations concerned collision, so let me tell you a bit about the project I was working on. The project's graphics look like Mario or Dangerous Dave, etc. - you get the idea, old-school pixels. So anyway, I remember trying to think of something other than AABB for the character's shape, but I couldn't think of anything. Perhaps I could get a suggestion for this? Another thing is the world - I don't want it to be just a linear world; I want mountains, etc. My idea is to use triangles, but I have no idea yet what to do if I want just part of the cube, say 3/4 or 2/4 or whatever. Hard-coding such things seems inefficient. P.S. I am not looking for the precision level offered by Box2D. Actually, I remember trying to implement it at first, but I failed, as my understanding of C++ wasn't advanced enough, as will be mentioned below. P.P.S. I am programming in C++, and I haven't done it for a couple of months now. I have no means of testing it either, as my PC is broken down, and this one can barely run games from the late '90s, not to speak of a compiler or a program with inefficient resource management... I am also not an expert (obviously); I don't even know if I can consider myself an average programmer. In short, I am simply curious about my thoughts and my past experience from programming the game. I may come back to it when my PC is fixed; I'm already keeping notes about these things.

    Read the article

  • links for 2011-02-04

    - by Bob Rhubart
    Oracle WebCenter Suite - Giving Users a Modern Experience: Webcast Q&A (Oracle Enterprise 2.0 Blog) - Kellsey Ruppel shares a summary of the viewer Q&A from the recent Oracle WebCenter Suite webcast. (tags: oracle otn enterprise2.0 webcenter)
    Oracle Fusion Middleware Security: Oracle Access Manager 11g Academy: The Policy Model (Part 1) - Brian Eidelman kicks off a series of posts covering Oracle Access Manager. (tags: oracle otn fusionmiddleware security)
    The Tom Kyte Blog: A short podcast... - Oracle senior technical architect Tom Kyte shares information on a series of upcoming live, in-person events in which he will participate. (tags: oracle otn ioug)
    Oracle and AIIM - Putting Enterprise 2.0 to Work (Oracle Enterprise 2.0 Blog) - Brian Dirking shares a recap of the recent online Enterprise 2.0 presentation by Andy MacMillan (Oracle) and Doug Miles (AIIM). (tags: oracle otn enterprise2.0)
    Arun Gupta: WebLogic Developer/Production Web Profile, Full Java EE 6 Platform - Chat Transcript and Slides from OTN Virtual Developer Day - Arun Gupta shares chat transcripts and more from the recent OTN Virtual Developer Day focused on WebLogic. (tags: weblogic java)
    Andrejus Baranovskis's Blog: How to Install Oracle ECM 11g PS3 - Domain Configuration Hint - Concise instructions from Oracle ACE Director Andrejus Baranovski. (tags: oracle otn oracleace enterprise2.0 weblogic)
    Oracle BI EE 11g & Oracle ADF - Part 1 - Understanding Security Integration - Rittman Mead's Venkatakrishnan J explores "how much Oracle ADF or the Oracle Fusion Middleware has influenced most of the features in BI EE 11g." (tags: oracle oracleace businessintelligence obiee)
    Gone With the Wind: Where Have All the Composites Gone? - SOA author Antony Reynolds solves a mystery. (tags: oracle otn soa)
    Playing with Oracle 11gR2, OEL 5.6 and VirtualBox 4.0.2 (1st Part) - "This installation should never be used for Production or Development purposes. This installation was created for educational purpose only, and is extremely helpful to learn and understand how Oracle works if you do not have access to a traditional hardware resource." - Oracle ACE Director Francisco Munoz Alvarez (tags: oracle otn virtualbox virtualization)

    Read the article

  • Reading Open Source Projects

    - by Hossein
    About my programming knowledge:
    - I have basic knowledge of programming.
    - I have never worked on a team project.
    - I have done only a couple of small solo projects.
    Problem: Consider a typical open source project. We want to know about the project so that we can contribute, test it, or just satisfy our curiosity. The documentation usually does not describe the code architecture, so RTFM(!) wouldn't apply here; it tends to describe how to use the software, not how it is designed. Mailing lists in big projects are very crowded; dozens of e-mails are sent in just an hour. Also, following the mailing list to learn the project is like trying to understand a film when you have arrived late and half of it is already over, or even worse. Obviously, the code is not like a novel, so you can't just download the source code and "start from the first page and so on"! Question: How should one understand an open source project? What are the steps to take in an open source project to understand how the whole thing works and get up to speed with what's "under the hood"?

    Read the article

  • When not to use Spring to instantiate a bean?

    - by Rishabh
    I am trying to understand what the correct usage of Spring would be - not syntactically, but in terms of its purpose. If one is using Spring, then should Spring code replace all bean instantiation code? When should one use, or not use, Spring to instantiate a bean? Maybe the following code sample will help you understand my dilemma:
      List<ClassA> caList = new ArrayList<ClassA>();
      for (String name : nameList) {
          ClassA ca = new ClassA();
          ca.setName(name);
          caList.add(ca);
      }
    If I configure Spring, it becomes something like:
      List<ClassA> caList = new ArrayList<ClassA>();
      for (String name : nameList) {
          ClassA ca = (ClassA) SomeContext.getBean(BeanLookupConstants.CLASS_A);
          ca.setName(name);
          caList.add(ca);
      }
    I personally think using Spring here is an unnecessary overhead, because:
    - The code is simpler to read/understand without it.
    - It isn't really a good place for dependency injection, as I am not expecting there to be multiple/varied implementations of ClassA that I would like the freedom to replace via Spring configuration at a later point in time.
    Am I thinking correctly? If not, where am I going wrong?
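    One common way to frame the dilemma, sketched below in plain Java with invented names such as NameFormatter (an illustration of general dependency-injection guidance, not code from the question): objects that merely hold data can be freely created with new, while collaborators that carry behavior you might want to mock or reconfigure are the ones worth having Spring inject, for example through a constructor:
      import java.util.ArrayList;
      import java.util.List;

      // Invented collaborator with behavior we may want to mock in tests.
      interface NameFormatter {
          String format(String raw);
      }

      // Plain data holder: instantiating it with "new" needs no container.
      class ClassA {
          private String name;
          public void setName(String name) { this.name = name; }
      }

      public class CaListBuilder {
          private final NameFormatter formatter; // injectable: wired by Spring

          public CaListBuilder(NameFormatter formatter) {
              this.formatter = formatter;
          }

          public List<ClassA> build(List<String> nameList) {
              List<ClassA> caList = new ArrayList<ClassA>();
              for (String name : nameList) {
                  ClassA ca = new ClassA();             // newable: plain "new" is fine
                  ca.setName(formatter.format(name));
                  caList.add(ca);
              }
              return caList;
          }
      }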

    Read the article

  • What to include in metadata?

    - by shyam
    I'm wondering if there are any general guidelines or best practices regarding when to split data out into a metadata format, as opposed to directly embedding it within the data. (A specific example is below.) My understanding of metadata is that it describes data (without the need to actually look at the data), allowing data to be quickly searched/filtered for easy access. Let's take for example a simple 3D model format. The actual data file itself is a binary file containing vertices and colors. Things like creation date, modified date and author name would be things that describe the binary data, so I would say these belong as metadata (outside of the binary file). But what if the application had no need to search or filter by these fields? Would it be acceptable to embed these fields directly into the binary data itself? Could they be duplicated in both the binary data and the metadata, or would this be considered bad practice? What about more ambiguous fields such as the model name, which could be considered part of the data itself, but also data describing the binary data? How do you decide which data to embed in the actual binary file, as opposed to separating it into a more flexible metadata format? Thanks!

    Read the article

  • What is the reason for section 1 of the LGPL, and what is the implication for section 10?

    - by Roland Schulz
    Why was section 1 added to LGPLv3? My understanding of sections 3 & 4 is that one can convey the combined work under any license and with no requirements from GPLv3 (besides those explicitly stated as requirements in LGPLv3 sections 3 & 4). Given that, why is section 1 necessary? Wouldn't sections 3 & 4 by themselves already imply what section 1 explicitly states? I assume that I'm missing something and section 1 isn't redundant. Assuming that, does this have implications for other sections in GPLv3? E.g. does conveying a covered work under sections 3 & 4 fall under the patent clause of section 10 of GPLv3? Why does section 1 not also state an exception for section 10? Put another way: is the Eigen FAQ correct in stating that "LGPL requires [for header-only libraries] pretty much the same as the 2-clause BSD license"? Is it true that for conveying object files including material from LGPLv3 headers no GPLv3 patent clauses apply?

    Read the article

  • Question about mipmaps + anisotropic filtering

    - by Telanor
    I'm a bit confused here and maybe someone can explain this to me. I created a simple test texture for my terrain which is nothing more than a solid green color with a black grid overlaid on top of it. If I look at the terrain in the distance with mipmapping on and linear filtering, the grid lines become blurry fairly quickly and further back the grid is pretty much invisible. With these settings, I don't get any moire patterns at all. If I turn on anisotropic filtering, however, the higher the anisotropic level, the more the terrain looks like it did without mipmapping. The lines are much crisper nearby, but in the distance I start to see terrible moire patterns. My understanding was that mipmapping is supposed to get rid of moire patterns. I've always had anisotropic filtering on in every game I play and I've never noticed any moire patterns as a result, so I don't understand why it's happening in my game. I am using logarithmic depth, however; could that be causing any problems? And if it is, how do I resolve it? I've created my sampler state like so (I'm using SlimDX):
      ssa = SamplerState.FromDescription(Engine.Device, new SamplerDescription
      {
          AddressU = TextureAddressMode.Clamp,
          AddressV = TextureAddressMode.Clamp,
          AddressW = TextureAddressMode.Clamp,
          Filter = Filter.Anisotropic,
          MaximumAnisotropy = anisotropicLevel,
          MinimumLod = 0,
          MaximumLod = float.MaxValue
      });

    Read the article

  • Which web framework or technologies would suit me?

    - by Suraj Chandran
    Hi, I had been working on desktop apps and server-side (non-web) code for some time, and now I am diving into web development for the first time. I plan to write a scalable, enterprise-level app. I have worked with Java, JavaScript, jQuery, etc., but I absolutely hate JSP. So is there any framework that focuses on developing enterprise-level web apps without JSP? I liked Wicket's approach, but I think it is a little lacking in support for dynamic HTML and jQuery (yes, I looked at wiQuery). Also, I feel making Wicket apps scalable would take some sweat. Can Spring MVC, Struts2, etc. help me with this using just, say, Java, JavaScript, and jQuery? Or are there any other options for me, like Wicket? Please do forgive me if anything above looks insane; I am still working on my understanding of enterprise web apps. NOTE: If you think that I should take a different direction or approach, please do suggest it!

    Read the article

  • I know fundamental programming. But how do I get started in game development now?

    - by Rohan Menon
    I'm a 20-year-old programming student. I know fundamental programming in BASIC, C, C++ and Java. What I wanted to ask is: where do I go from here? Are there any books the community can mention that will help me develop a game, or at least learn game development? I've had a lot of ideas and really want to make some sort of prototype to see if I'm suited for the industry. I really don't mind learning any new languages, but I need to know what I should begin with. A good book that will help with a little more understanding as I go along would be very helpful. Maybe a tutorial to develop some basic 2D games, like a side-scroller, Snake or Pocket Tanks, in an easy-to-understand SDK? I know that to get some credit under your belt, you need to be able to make a few games on your own. Also, what platform should I start on: the PC, iOS or Android (as an introduction) for now? I don't want to get into high-level game design just yet, just something a bit basic to help out in future development. Anything pointing me in the right direction will be really helpful. Edit: Also, I want to say that I'm looking at this from a game designer's point of view more than a game programmer's. I want suggestions on any SDKs or easy-to-use programs I can use to understand game design, and then delve deeper into the programming after that - not as employment, but to develop my own games (for now).

    Read the article

  • Requesting quality analysis test cases ahead of implementation/change

    - by arin
    Recently I was assigned to work on a major requirement that falls between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the widespread use and effects of this requirement, I asked whether, if it came to a point where the requirement could not be finished prior to release, it would be a viable option to scrap the current state and revert back to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid me in comprehending this task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request and on the responsibility for this requirement, I insisted, and I have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement, trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted back to the prior state due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Register now for the UK Windows Azure Self-paced Interactive Learning Course starting May 10th

    - by Eric Nelson
    [Suggested twitter tag #selfpacedazure] We (myself and David Gristwood) have been working in the UK to create a fantastic opportunity to get yourself up to speed on the Windows Azure Platform over a 6-week period starting May 10th - without ever needing to leave the comfort of your home/office. The course is derived from the internal training Microsoft gives on Azure, which is both fun and challenging in equal parts - and we felt it was just too good to keep to ourselves! We will be releasing more details nearer the date, but hopefully the following is enough to convince you to register and … recommend it to a colleague or three :-) What we have produced is the "Microsoft Azure Self-paced Learning Course". This is a free, interactive, self-paced, technical training course covering the Windows Azure platform - Windows Azure, SQL Azure and the Azure AppFabric. The course takes place over a six-week period finishing on June 18th. During the course you will work from your own home or workplace, get involved via interactive Live Meeting sessions, watch online videos, work through hands-on labs, and research and complete weekly coursework assignments. The mentors and other attendees on the course will help you in your research and learning, and there are weekly Live Meetings where you can raise questions and interact with them. This is a technical course, aimed at programmers, system designers, and architects who want a solid understanding of the Microsoft Windows Azure platform, hence a prerequisite for this course is at least six months programming in the .NET framework and Visual Studio. Check out the full details of the event or go straight to registration. The course outline is:
    Week 1 - Windows Azure Platform
    Week 2 - Windows Azure Storage
    Week 3 - Windows Azure Deep Dive and Codename "Dallas"
    Week 4 - SQL Azure
    Week 5 - Windows Azure Platform AppFabric Access Control
    Week 6 - Windows Azure Platform AppFabric Service Bus
    If you have any questions about the course and its suitability, please email [email protected].

    Read the article

  • OOP for unit testing: The good, the bad and the ugly

    - by Jeff
    I have recently read Miško Hevery's PDF guide to writing testable code, in which it's stated that you should limit class instantiation in your constructors. I understand that this is what you should do, because it allows you to easily mock the objects that are passed as parameters to your class. But when it comes to writing actual code, I often end up with things like this (the example is in PHP using Zend Framework, but I think it's self-explanatory):
      class Some_class
      {
          private $_data;
          private $_options;
          private $_locale;

          public function __construct($data, $options = null)
          {
              $this->_data = $data;
              if ($options != null) {
                  $this->_options = $options;
              }
              $this->_init();
          }

          private function _init()
          {
              if (isset($this->_options['locale'])) {
                  $locale = $this->_options['locale'];
                  if ($locale instanceof Zend_Locale) {
                      $this->_locale = $locale;
                  } elseif (Zend_Locale::isLocale($locale)) {
                      $this->_locale = new Zend_Locale($locale);
                  } else {
                      $this->_locale = new Zend_Locale();
                  }
              }
          }
      }
    According to my understanding of Miško Hevery's guide, I shouldn't instantiate Zend_Locale in my class but push it through the constructor (which can be done through the options array in my example). I am wondering what would be the best practice to get the most flexibility for unit testing this code, and also if I ever want to move away from Zend Framework. Thanks in advance.
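    For comparison, a minimal sketch of the injected variant (my own illustration, assuming a ready-made Zend_Locale may be passed in; it is not taken from Hevery's guide): the caller owns the construction logic and the constructor only stores the collaborator, so a test can hand in a stub:
      class Some_class
      {
          private $_data;
          private $_locale;

          // Sketch: the Zend_Locale collaborator is injected instead of being
          // built inside _init(), so tests can pass in a mock locale.
          public function __construct($data, Zend_Locale $locale)
          {
              $this->_data = $data;
              $this->_locale = $locale;
          }
      }

      // The caller (or a factory / container) decides how the locale is made:
      $locale = Zend_Locale::isLocale($input) ? new Zend_Locale($input) : new Zend_Locale();
      $obj = new Some_class($data, $locale);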

    Read the article
