Search Results

Search found 27973 results on 1119 pages for 'power point vba'.


  • What Will Happen to Real Estate Leases when Operating Leases are Gone?

    - by Theresa Hickman
    Many people are concerned about what will happen to real estate leases when FASB and IASB abolish operating leases. They plan to unveil the proposed standards on treating leases this summer as part of the convergence project, but no "finalized ruling" is expected for at least a year, because it will need to get formal consensus from many players: the SEC, the American Association of Investors, Congress, the Big Four, the American Association of Realtors, the international equivalents of these, etc.

    If your accounting is a bit rusty: an operating lease is where you lease equipment or some asset for a shorter period than the actual (expected) life of the asset and then give the asset back while it still has some useful life in it. (Think leasing a car.) Because an operating lease does not contain any of the provisions that would qualify it as a capital lease, the lease is not treated as a sale or purchase; it hits the lessee's rental expense and the lessor's revenue. So it all stays on the P&L (assuming no prepayments are made). Capital leases, on the other hand, hit the lessee's and lessor's balance sheets, because the asset is treated as a sale. (I'm ignoring interest and depreciation here to emphasize my point.)

    Question: What will happen to real estate leases when operating leases go away, and how will Oracle Financials address these changes?

    Before I attempt to address these questions, here's a real-life example to expound on some of the issues. Let's say a U.S. retailer leases a store in a mall for 15 years. Under U.S. GAAP, the lease is considered an operating or expense lease. Will that same lease be considered a capital lease under IFRS? Real estate leases are supposedly going to be capitalized under IFRS. If so, will everyone need to change all leases from operating to capital? Or could we make some adjustments so we report the lease as an expense for operations reporting but capitalize it for SEC reporting? Would all aspects of the lease be capitalized, or would some line items still be expensed? For example, many retail store leases are defined to include (1) the agreed-to rent amount; (2) a negotiated increase in base rent, e.g., maybe a 5% increase in Year 5; (3) a sales rent component whereby the retailer pays a variable additional amount based on the sales generated in the prior month; and (4) parking lot maintenance fees. Would the entire lease be capitalized, or would some portions still be expensed?

    To help answer these questions, I met up with our resident accounting expert and walking encyclopedia, Seamus Moran. Here's what he had to say:

    Oracle is aware of the potential changes specific to reporting/capitalization of real estate leases; i.e., we are aware that FASB and IASB have identified real estate leases as one of the areas for standards convergence. Oracle stays apprised of the ongoing convergence through our domain expertise staff, our relationships with customers, our market awareness, and, of course, our relationships with the Big Four. This is part of our normal process with respect to regulatory compliance worldwide. At this time, Oracle expects that the standards convergence committee will make a recommendation about reporting standards for real estate leases in about a year. Following typical procedures, we also expect that the recommendation will be up for review for a year, and customers will then need to start reporting to the new standard about a year after that. So that means we would expect the first customer to report under the new standard in maybe three years.
    Typically, after the new standard is finalized and distributed, we find that our customers then begin to evaluate how they plan to meet it. And through groups like the Customer Advisory Boards (CABs), our customers tell us what kind of product changes are needed in order to satisfy their new reporting requirements. Of course, Oracle is also working with the Big Four and Accenture and other implementers in order to ascertain that these recommended changes will indeed meet the new reporting standards. So the best advice we can offer right now is: stay apprised of the standards convergence committee; know that Oracle is also staying abreast of developments; get involved with your CAB so your voice is heard; and know that Oracle products continue to be GAAP compliant, and we will continue to maintain that as our standard. But exactly what is that "standard"? We need to wait on the standards convergence committee. In a nutshell, operating leases will become either capital leases or month-to-month rentals, but it is still too early, too political and too uncertain to call at this point.
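    For readers who want the mechanics behind "hitting the balance sheet": under the usual capital-lease treatment (an illustration on my part, not something taken from the proposed standard), the lessee recognizes the present value of the remaining lease payments as both an asset and a liability:

        \[ PV = \sum_{t=1}^{n} \frac{R_t}{(1+r)^t} \]

    where R_t is the lease payment due in period t, r is the discount rate, and n is the number of remaining periods. On a hypothetical 15-year store lease at $10,000 a month with a 6% annual discount rate, that puts roughly a $1.2 million asset and liability on the books on day one, instead of a $10,000 monthly rent expense, which is exactly why retailers are watching this so closely.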

    Read the article

  • HTML Javascript Hidden Object or Photo Hunt Game

    - by PeteT
    Is anyone aware of any example photo hunt/hidden object games, either in HTML and Javascript, or Flash if necessary? I am having trouble finding one; I may be using the wrong words to search. Photo hunt being like the Where's Wally/Waldo books, where you look for Wally in a complex image until you find him. So if it were played on screen, you would press the location of Wally and it would either be correct or wrong, possibly timed. I am hoping to find one where you can just load in your own photos and specify some co-ordinates that match where the hidden object is. A spot-the-difference example may be useful as a starting point, but I haven't found a web-based example of either yet.
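    For what it's worth, the core mechanic here is just hit-testing a click against stored co-ordinates. A minimal sketch (all names made up; written in C# for clarity, but it translates almost line-for-line to Javascript):

        // One hidden object = a named rectangular region in image pixel co-ordinates.
        struct HiddenObject
        {
            public string Name;
            public int X, Y, Width, Height;

            public bool IsHit(int clickX, int clickY)
            {
                return clickX >= X && clickX < X + Width
                    && clickY >= Y && clickY < Y + Height;
            }
        }

        static class PhotoHunt
        {
            static void Main()
            {
                // Load your own photo and specify the co-ordinates of the target.
                var wally = new HiddenObject { Name = "Wally", X = 412, Y = 95, Width = 30, Height = 60 };

                // On each click/tap, test the location against the target region.
                bool found = wally.IsHit(420, 120);   // true: inside the region
                System.Console.WriteLine(found ? "Correct!" : "Keep looking...");
            }
        }

    Add a countdown timer around the click handler and you have the timed variant.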

    Read the article

  • Slide Creation Checklist

    - by Daniel Moth
    PowerPoint is a great tool for conference (large audience) presentations, which is the context for the advice below. The #1 thing to keep in mind when you create slides (at least for conference sessions) is that they are there to help you remember what you were going to say (the flow and key messages) and for the audience to get a visual reminder of the key points. Slides are not there for the audience to read what you are going to say anyway. If they were, what would be the point of you being there? Slides are not holders for complete sentences (unless you are quoting); use Microsoft Word for that purpose, either as a physical handout or as a URL link that you share with the audience. When you dry run your presentation, if you find yourself reading the bullets on your slide, you have missed the point. You have a message to deliver that can be done regardless of your slides; remember that. The focus of your audience should be on you, not the screen.

    Based on that premise, I have created a checklist that I go over before I start a new deck and also once I think my slides are ready.

    1. Turn AutoFit OFF. I cannot stress this enough.
    2. For each slide, explicitly pick a slide layout. In my presentations, I use only one Title Slide, a Section Header per demo slide, and for the rest of my slides one of these three: Title and Content, Title Only, or Blank. Most people that are newbies to PowerPoint get whatever default layout the New Slide creates for them and then start deleting and adding placeholders to that. You can do better than that (and you'll be glad you did if you also follow item #12, 'Reset', below).
    3. Every slide must have an image.
    4. Remove all punctuation (e.g. periods, commas) other than exclamation points and question marks (! ?).
    5. Don't use color or other formatting (e.g. italics, bold) for text on the slide.
    6. Check your animations. Avoid animations that hide elements that were on the slide (instead use a new slide and a transition). Ensure that animations that bring new elements in bring them into white space instead of over other existing elements. A good test is to print the slide and see that it still makes sense even without the animation.
    7. Print the deck in black and white choosing the "6 slides per page" option. Can I still read each slide without losing any information? If the answer is "no", go back and fix the slides so the answer becomes "yes".
    8. Don't have more than 3 bullet levels/indents. In other words: you type some text on the slide, hit 'Enter', hit 'Tab', type some more text, and repeat that sequence at most one final time. Ideally your outer bullets have only one level of sub-bullets (i.e. one level of indentation beneath them).
    9. Don't have more than 3-5 outer bullets per slide. Space them evenly vertically, e.g. with blank lines in between.
    10. Don't wrap. For each bullet on all slides check: does the text for that bullet wrap to a second line? If it does, change the wording so it doesn't. Or create a terser bullet and make the original long text a sub-bullet of that one (thus decreasing the font size, but still being consistent) and have no wrapping.
    11. Use the same consistent fonts (i.e. Font Face, Font Size, etc.) throughout the deck for each level of bullet. In other words, don't deviate from the PowerPoint template you chose (or that was chosen for you).
    12. Go on each slide and hit 'Reset'. 'Reset' is a button on the 'Home' tab of the ribbon, or you can find the 'Reset Slide' menu when you right-click on a slide in the left 'Slides' list. If your slides can survive doing that without you "fixing" things after the Reset action, you are golden!
    13. For each slide ask yourself: if I had to replace this slide with a single sentence that conveys the key message, what would that sentence be? This exercise leads you to merge slides (where the key message is split) or split a slide into many, if there were too many key messages on the slide in the first place. It can also lead you to redesign a slide so the text on it really is just explanation or evidence for the key message you are trying to convey.
    14. Get the length right. Is the length of this deck suitable for the time you have been given to present? If not, cut content! It is far better to deliver less in a relaxed, polished, engaging, memorable way than to deliver more content in great haste. As a rule of thumb, multiply the number of slides by 2 minutes, add the time you need for each demo, and check whether that adds up to more than the time you have been allotted. If it does, start cutting content; we've all been there and it has to be done.

    As always, rules and guidelines are there to be bent and even broken sometimes. Start with the above and, on a slide-by-slide basis, decide which rules you want to bend. That is smarter than throwing all the rules out from the start, right? Comments about this post welcome at the original blog.
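    As a footnote: several of these checks are mechanical enough to script. Here is a minimal sketch (mine, not a supported tool) using the PowerPoint interop API from C# to enforce item 1 across a whole deck; the file path is hypothetical, and the equivalent VBA inside PowerPoint is the same few lines:

        using Microsoft.Office.Core;                    // MsoTriState, MsoAutoSize
        using Microsoft.Office.Interop.PowerPoint;

        static class DeckChecks
        {
            static void Main()
            {
                var app = new Application();
                Presentation deck = app.Presentations.Open(@"C:\talks\mydeck.pptx");  // hypothetical path

                // Item 1: turn AutoFit OFF on every shape that has a text frame.
                foreach (Slide slide in deck.Slides)
                {
                    foreach (Shape shape in slide.Shapes)
                    {
                        if (shape.HasTextFrame == MsoTriState.msoTrue)
                        {
                            shape.TextFrame2.AutoSize = MsoAutoSize.msoAutoSizeNone;
                        }
                    }
                }

                deck.Save();
            }
        }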

    Read the article

  • python, cluster computing, design help [closed]

    - by j dawg
    I would like to create my own parallel computing server. Can you please point me to some resources I can use to help me develop my server? Sorry, like I said, I need help getting started. Yes, I am limited to Python; I cannot use C. I am using a bunch of workstations and I want to use all the CPUs in those machines. So what I am looking for is blog posts, books, or articles that can help me develop my own client/server tools to send code from the client to the servers and spawn Python processes based on the number of CPUs. I know how to do the subprocessing/multiprocessing part of the program; I do not know how to create the server that will take the client's requests. I also need to figure out the best way to handle sending file data, like NetCDF files or other spatial data. Any suggestions very welcome.

    Read the article

  • List of Commonly Used Value Types in XNA Games

    - by Michael B. McLaughlin
    Most XNA programmers are concerned about generating garbage, more specifically about allocating GC-managed memory ("GC" stands for "garbage collector" and is both the name of the class that provides access to the garbage collector and an acronym for the garbage collector itself as a concept). Two of the major target platforms for XNA (Windows Phone 7 and Xbox 360) use variants of the .NET Compact Framework, and on both variants the GC runs under various circumstances. Of concern to XNA programmers is the fact that it runs automatically after a fixed amount of GC-managed memory has been allocated (currently 1 MB on both systems). Many beginning XNA programmers are unaware of what constitutes GC-managed memory, though. So here's a quick overview. In .NET, there are two different "types" of types: value types and reference types. Only reference types are managed by the garbage collector. Value types are not managed by the garbage collector and are instead managed in other ways that are implementation dependent. For purposes of XNA programming, the important point is that they are not managed by the GC and thus do not, by themselves, increment that internal 1 MB allocation counter. (N.b. structs are value types. If you have a struct that has a reference type as a member, then that reference type, when instantiated, will still be allocated in GC-managed memory and will thus count against the 1 MB allocation counter. Putting it in a struct doesn't change the fact that it gets allocated on the GC heap, but the struct itself is created outside of the GC's purview.) Both value types and reference types use the keyword 'new' to allocate a new instance. Sometimes this keyword is hidden by a method which creates new instances for you, e.g. XmlReader.Create. But the important thing to determine is whether you are dealing with a value type or a reference type. If it's a value type, you can use the 'new' keyword to allocate new instances of that type without incrementing the GC allocation counter (except as above, where it's a struct with a reference type in it that is allocated by the constructor; but there are no .NET Framework or XNA Framework value types that do this, so it would have to be a struct you created, or one in some third-party library you were using, for that to even become an issue). The following is a list of most of the value types you are likely to use in a generic XNA game:

    - AudioCategory (used with XACT; not available on WP7)
    - AvatarExpression (Xbox 360 only, but exposed on Windows to ease Xbox development)
    - bool
    - BoundingBox
    - BoundingSphere
    - byte
    - char
    - Color
    - DateTime
    - decimal
    - double
    - any enum (System.Enum itself is a class, but all enums are value types, such that there are no GC allocations for enums)
    - float
    - GamePadButtons
    - GamePadCapabilities
    - GamePadDPad
    - GamePadState
    - GamePadThumbSticks
    - GamePadTriggers
    - GestureSample
    - int
    - IntPtr (rarely but occasionally used in XNA)
    - KeyboardState
    - long
    - Matrix
    - MouseState
    - nullable structs (anytime you see, e.g., int? something, that '?' denotes a nullable struct, also called a nullable type)
    - Plane
    - Point
    - Quaternion
    - Ray
    - Rectangle
    - RenderTargetBinding
    - sbyte (though I've never seen it used, since most people would just use a short)
    - short
    - TimeSpan
    - TouchCollection
    - TouchLocation
    - TouchPanelCapabilities
    - uint
    - ulong
    - ushort
    - Vector2
    - Vector3
    - Vector4
    - VertexBufferBinding
    - VertexElement
    - VertexPositionColor
    - VertexPositionColorTexture
    - VertexPositionNormalTexture
    - VertexPositionTexture
    - Viewport

    So there you have it.
    That’s not quite a complete list, mind you. For example, there are various structs in the .NET Framework you might make use of. I left out everything from the Microsoft.Xna.Framework.Graphics.PackedVector namespace, since everything in there ventures into the realm of advanced XNA programming anyway (n.b. every single instantiable thing in that namespace is a struct and thus a value type; there are also two interfaces, but interfaces cannot be instantiated at all and thus don't figure into this discussion). There are so many enums you're likely to use (PlayerIndex, SpriteSortMode, SpriteEffects, SurfaceFormat, etc.) that including them would've flooded the list and reduced its utility, so I went with "any enum" and trust that you can figure out what the enums are (and it's rare to use 'new' with an enum anyway). That list also doesn't include any of the pre-defined static instances of some of the classes (e.g. BlendState.AlphaBlend, BlendState.Opaque, etc.), which are already allocated, such that using them doesn't cause any new allocations and therefore doesn't increase that 1 MB counter. That list also has a few misleading things. VertexElement, VertexPositionColor, and all the other vertex types are structs, but you're only likely to ever use them as an array (for use with VertexBuffer or DynamicVertexBuffer), and all arrays are reference types (even arrays of value types such as VertexPositionColor[ ] or int[ ]).* So that's it for now. The note below may be a bit confusing (it deals with how the GC works and how arrays are managed in .NET). If so, you can probably safely ignore it for now, but feel free to ask any questions regardless.

    * Arrays of value types (where the value type doesn't contain any reference-type members) are much faster for the GC to examine than arrays of reference types, so there is a definite benefit to using arrays of value types where it makes sense. But creating arrays of value types does cause the GC's allocation counter to increase. Indeed, allocating a large array of a value type is one of the quickest ways to increment the allocation counter, since a .NET array is a sequential block of memory. An array of reference types is just a sequential block of references (typically 4 bytes each), while an array of value types is a sequential block of instances of that type. So for an array of Vector3s it would be 12 bytes per element, since each float is 4 bytes and there are 3 in a Vector3; for an array of VertexPositionNormalTexture structs it would typically be 32 bytes per element, since it has two Vector3s and a Vector2. (Note that there are a few additional bytes taken up in the creation of an array, typically 12 but sometimes 16 or possibly even more, depending on the implementation details of the array type on the particular platform the code is running on.)
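    To see the struct rule above in code, here is a tiny sketch (the types are invented for illustration):

        using Microsoft.Xna.Framework;   // Vector2 is a struct (value type)

        struct Particle                  // a pure value type: no GC-managed allocation
        {
            public Vector2 Position;     // value-type member, stored inline
            public Vector2 Velocity;
        }

        struct Emitter
        {
            public Vector2 Origin;
            public int[] Seeds;          // reference-type member: an array lives on the GC heap
        }

        static class GcCounterDemo
        {
            static void Main()
            {
                Particle p = new Particle();   // 'new' on a struct: no GC allocation,
                                               // does not advance the 1 MB collection trigger
                Emitter e = new Emitter();     // still no GC allocation...
                e.Seeds = new int[256];        // ...but this line allocates GC-managed memory
            }
        }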

    Read the article

  • Why lock-free data structures just aren't lock-free enough

    - by Alex.Davies
    Today's post will explore why the current ways to communicate between threads don't scale, and show you a possible way to build scalable parallel programming on top of shared memory.

    The problem with shared memory

    Soon, we will have dozens, hundreds and then millions of cores in our computers. It's inevitable, because individual cores just can't get much faster. At some point, that's going to mean that we have to rethink our architecture entirely, as millions of cores can't all access a shared memory space efficiently. But millions of cores are still a long way off, and in the meantime we'll see machines with dozens of cores, struggling with shared memory.

    Alex's tip: The best way for an application to make use of that increasing parallel power is to use a concurrency model like actors, which deals with synchronisation issues for you. Then, the maintainer of the actors framework can find the most efficient way to coordinate access to shared memory to allow your actors to pass messages to each other efficiently.

    At the moment, NAct uses the .NET thread pool and a few locks to marshal messages. It works well on dual and quad core machines, but it won't scale to more cores. Every time we use a lock, our core performs an atomic memory operation (e.g. CAS) on a cell of memory representing the lock, so it's sure that no other core can possibly have that lock. This is very fast when the lock isn't contended, but we need to notify all the other cores, in case they held the cell of memory in a cache. As the number of cores increases, the total cost of a lock increases linearly. A lot of work has been done on "lock-free" data structures, which avoid locks by using atomic memory operations directly. These give fairly dramatic performance improvements, particularly on systems with a few (2 to 4) cores. The .NET 4 concurrent collections in System.Collections.Concurrent are mostly lock-free. However, lock-free data structures still don't scale indefinitely, because any use of an atomic memory operation still involves every core in the system.

    A sync-free data structure

    Some concurrent data structures are possible to write in a completely synchronization-free way, without using any atomic memory operations. One useful example is a single producer, single consumer (SPSC) queue. It's easy to write a sync-free fixed-size SPSC queue using a circular buffer*. Slightly trickier is a queue that grows as needed. You can use a linked list to represent the queue, but if you leave the nodes to be garbage collected once you're done with them, the GC will need to involve all the cores in collecting the finished nodes. Instead, I've implemented a proof of concept inspired by this Intel article which reuses the nodes by putting them in a second queue to send back to the producer.

    * In all these cases, you need to use memory barriers correctly, but these are local to a core, so they don't have the same scalability problems as atomic memory operations.

    Performance tests

    I tried benchmarking my SPSC queue against the .NET ConcurrentQueue, and against a standard Queue protected by locks. In some ways, this isn't a fair comparison, because both of these support multiple producers and multiple consumers, but I'll come to that later. I started on my dual-core laptop, running a simple test that had one thread producing 64-bit integers, and another consuming them, to measure the pure overhead of the queue. So, nothing very interesting here.
    Both concurrent collections perform better than the lock-based one, as expected, but there's not a lot to choose between the ConcurrentQueue and my SPSC queue. I was a little disappointed, but then, the .NET Framework team spent a lot longer optimising it than I did. So I dug out a more powerful machine that Red Gate's DBA tools team had been using for testing. It is a 6-core Intel i7 machine with hyperthreading, adding up to 12 logical cores. Now the results get more interesting. As I increased the number of producer-consumer pairs to 6 (to saturate all 12 logical cores), the locking approach was slow, and got even slower, as you'd expect. What I didn't expect to be so clear was the drop-off in performance of the lock-free ConcurrentQueue. I could see the machine only using about 20% of available CPU cycles when it should have been saturated. My interpretation is that as all the cores used atomic memory operations to safely access the queue, they ended up spending most of the time notifying each other about cache lines that needed invalidating. The sync-free approach scaled perfectly, despite still working via shared memory, which, after all, should still be a bottleneck. I can't quite believe that the results are so clear, so if you can think of any other effects that might cause them, please comment! Obviously, this benchmark isn't realistic, because we're only measuring the overhead of the queue. Any real workload, even on a machine with 12 cores, would dwarf the overhead, and there'd be no point worrying about this effect. But would that be true on a machine with 100 cores?

    Still to be solved

    The trouble is, you can't build many concurrent algorithms using only an SPSC queue to communicate. In particular, I can't see a way to build something as general-purpose as actors on top of just SPSC queues. Fundamentally, an actor needs to be able to receive messages from multiple other actors, which seems to need an MPSC queue. I've been thinking about ways to build a sync-free MPSC queue out of multiple SPSC queues and some kind of sign-up mechanism. Hopefully I'll have something to tell you about soon, but leave a comment if you have any ideas.
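    As a footnote for the curious, here is a minimal sketch of the fixed-size, circular-buffer SPSC queue idea from above (my illustration, not the NAct or proof-of-concept code). It is only safe with exactly one producer thread and one consumer thread; the volatile fields supply the core-local memory barriers mentioned in the footnote:

        // Single-producer/single-consumer bounded queue over a circular buffer.
        public sealed class SpscQueue<T>
        {
            private readonly T[] _buffer;
            private volatile int _head;   // written only by the consumer
            private volatile int _tail;   // written only by the producer

            public SpscQueue(int capacity)
            {
                _buffer = new T[capacity + 1];   // one slot sacrificed to tell full from empty
            }

            // Producer side: no locks, no interlocked operations.
            public bool TryEnqueue(T item)
            {
                int tail = _tail;
                int next = (tail + 1) % _buffer.Length;
                if (next == _head) return false;   // full
                _buffer[tail] = item;
                _tail = next;                      // volatile write publishes the item
                return true;
            }

            // Consumer side.
            public bool TryDequeue(out T item)
            {
                int head = _head;
                if (head == _tail) { item = default(T); return false; }   // empty
                item = _buffer[head];
                _buffer[head] = default(T);        // release the slot for the GC
                _head = (head + 1) % _buffer.Length;
                return true;
            }
        }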

    Read the article

  • CodePlex Daily Summary for Thursday, February 17, 2011

    CodePlex Daily Summary for Thursday, February 17, 2011

    Popular Releases

    - Marr DataMapper: Marr Datamapper 2.7 beta: Changed QueryToGraph relationship rules: 1) any parent entity (entity with children) must have at least one PK specified or an exception will be thrown; 2) all 1-M relationship entities must have at least one PK specified or an exception will be thrown. Only 1-1 entities with no children are allowed to have 0 PKs specified. Fixed AutoQueryToGraph bug (columns in graph children were being included in the select statement).
    - datajs - JavaScript Library for data-centric web applications: datajs version 0.0.2: This release adds support for parsing DateTime and DateTimeOffset properties into Javascript Date objects and serializing them back.
    - thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features: added a new option that allows properties on data contract types to be marked as virtual. Bug fixes: fixed a bug caused by certain project properties not being available on Web Service Software Factory projects; fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available. For example the Web Item CommandBar does not exist ...
    - Document.Editor: 2011.5: What's new for Document.Editor 2011.5: new export to email, new export to image, new document background color, improved tooltips, minor bug fixes, improvements and speed-ups.
    - Terminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a portable environment, you can use the "Clean Install" and target your portable folder. Tested and it works fine. Changes for this release: reworked the ToolStrip settings to avoid the VS.NET clash with auto-generating files for .settings files; renamed it to .settings.config. Packaged both log4net and ToolStripSettings files into the installer. Upgraded the version inform...
    - Export Test Cases From TFS: Test Case Export to Excel 1.0: Team Foundation Server (TFS) 2010 enables users to manage test cases as Work Items. The complete description of a test case, along with its steps, can be managed as a single Work Item in TFS 2010. Before migrating to TFS 2010, many test teams will have used MS Excel to manage their test cases (or test scripts). However, after migrating to TFS 2010, test teams can manage the test cases in the server, but there may be a need to get the test cases into an Excel sheet, e.g. for approval from Business Analysts ...
    - WriteableBitmapEx: WriteableBitmapEx 0.9.7.0: Fixed many bugs. Added the Rotate method, which rotates the bitmap in 90° steps clockwise and returns a new rotated WriteableBitmap. Added a Flip method with support for FlipMode.Vertical and FlipMode.Horizontal. Added a new Filter extension file with a convolution method and some kernel templates (Gaussian, Sharpen). Added the GetBrightness method, which returns the brightness/luminance of the pixel at the x, y coordinate as a byte. Added the ColorKeying BlendMode. Added boundary ...
    - AllNewsManager.NET: AllNewsManager.NET 1.3: This new version provides several new features, improvements and bug fixes. Some new features: online users, avatars, a copy function (to create a new article from another one), SEO improvements (friendly URLs), new admin buttons. And more...
    - Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011): moved to Beta stage; publish photo feature; "email" field of User object added; new Graph API objects: Group, Event; new Graph API connections: likes, groups, events.
    - DJME - The jQuery extensions for ASP.NET MVC: DJME2 - The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can go to http://www.dotnetage.com/djme.html. What is new: the Grid extension added; the ModelBinder added, which helps you create bindable data Actions; the DnaFor() control factory added that enables Model-bindable extensions; enhanced ListBox and ComboBox data binding.
    - Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features; many bug fixes.
    - Build Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: Fixes and/or improvements. Major: added complete support for VC projects, including .vcxproj and .vcproj; all padding issues fixed; a project's assembly versions are only changed if the project has been modified. Minor: order of versioning style values is now according to their respective positions in the attributes, i.e. Major, Minor, Build, Revision; fixed issue with global variable storage with some projects; fixed issue where if a project item's file does not exist, a ...
    - Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. Bug fixes and minor feature requests added.
    - TV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software.
    - Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1, including the broken per-desktop backgrounds. Further improves the speed of switching desktops. A few UI performance improvements. Added donations links.
    - NuGet: NuGet 1.1: NuGet is a free, open source, developer-focused package management system for the .NET platform, intent on simplifying the process of incorporating third-party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. The URL to the package OData feed is http://go.microsoft.com/fwlink/?LinkID=206669. To see the list of issues fixed in this release, visit our issues list.
    - EnhSim: EnhSim 2.4.0: This release supports WoW patch 4.06 at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84. To use the GUI you must have the .NET 4.0 Framework installed; this can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992. Changes since 2.3.0: Upd...
    - Sterling Isolated Storage Database with LINQ for Silverlight and Windows Phone 7: Sterling OODB v1.0: Note: use this changeset to download the source example that has been extended to show database generation, backup, and restore in the desktop example. Welcome to the Sterling 1.0 RTM. This version is not backwards-compatible with previous versions of Sterling. Sterling is also available via NuGet. This product has been used and tested in many applications and contains a full suite of unit tests. You can refer to the User's Guide for complete documentation, and use the unit tests as guide...
    - PDF Rider: PDF Rider 0.5.1: Changes from the previous version: use dynamic layout to better fit text in other languages; includes French and Spanish localizations. Prerequisites: Microsoft Windows operating systems (XP, Vista, 7); Microsoft .NET Framework 3.5 runtime; a PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions, choose one of the following methods: 1. Download and run the "pdfRider0.5.1-setup.exe" (recommended). 2. Down...
    - Snoop, the WPF Spy Utility: Snoop 2.6.1: This release is a bug-fixing release. Most importantly, issues have been seen around WPF 4.0 applications not always showing up in the app chooser. Hopefully, they are fixed now. I thought this issue warranted a minor release since more and more people are going WPF 4.0 and I don't want anyone to have any problems. Dan Hanan also contributes again with several usability features. Thanks Dan! Happy Snooping! p.s. By request, I am also attaching a .zip file ... so that people can install it ...

    New Projects

    - Alpe d'HuZes: This project contains the source for Alpe d'HuZes, an organization that fights the cancer disease by giving people the chance to ride the Alpe d'Huez, a mountain in France. By climbing this mountain, money is collected which is entirely donated to the "kankerbestrijding".
    - AstroLib: Astronomical library.
    - Devon: Devon_Project.
    - earthquake predictor: This project is an attempt to create an earthquake prediction application, which can help save lives. It is based on the theory that the number of lost pets grows before an earthquake. These statistics can be obtained from free newspapers, boards, forums... Technology: C#, ASPX, .NET 4.
    - FCNS.Calendar: FCNS.Calendar ??? MonoCalendar(?????????) ?.NET??????????,??????Mac???????????????iCal?????。???????????.??????????.?????????????????.????????????????.
    - FlashRelease [O-GO.ru edition]: FlashRelease is a tool for easily creating descriptions of new video/music/game torrent releases. Developed in Delphi.
    - Forms based authentication for SharePoint2010: Forms-based authentication management features for SharePoint 2010.
    - ITune.LittleTools: Reads your iTunes library and copies your track ratings in iTunes onto your files in Windows.
    - MingleProject: Just a simple project that showcases the power of ASP.NET and Visual Studio.
    - MvcContrib UI Extensions - Themed Grid & Menu: UI extensions to the MvcContrib project: themed Grid & Menu.
    - NDOS Azure: Windows Azure projects developed at the "Open Source and Interoperability Development Nucleous" (http://ndos.codeplex.com/) at Universidade Federal do Rio Grande do Sul (http://www.ufrgs.br).
    - ObjectDumper: ObjectDumper takes a normal .NET object and dumps it to a string, TextWriter, or file. Handy for debugging purposes.
    - Open Analytic: Open Analytic is an open source business intelligence framework that believes in simplicity. The framework is developed with .NET and can be easily integrated in custom product development.
    - Pomodoro.Oob Expression Blend Example App: This is a non-functional Silverlight 4 out-of-browser app to demonstrate functionality in Expression Blend and to accompany user group talks and presentations on Blend.
    - Senior Design: UConn senior design project!
    - Service Monitors - A services health monitoring tool: The idea behind this project is simple: I want to know when a service related to my application is not available. Our first intent is to get a tool to generate the necessary data to be compliant with the availability SLA of our systems.
    - SGO: Organizer.
    - Sina Weibo QReminder: A handy utility that displays remind messages in the browser title for Sina Weibo.
    - System Center Configuration Manager Integration Pack Extention: This integration pack adds some additional integration points for Opalis to System Center Configuration Manager. These functions are used in my user self-imaging workflow that will be demoed at MMS 2011.
    - TwittaFox: TwittaFox is a small Twitter client which can be accessed directly from the tray.
    - Ultralight Markup: Ultralight Markup makes it easier for webmasters to allow safe user comments. It features a stripped-down intermediate markup language meant to bridge the gap between text entry and HTML. The project includes an ASP.NET MVC implementation with a Javascript editor.
    - Unit Conversion Library: Unit Conversion Library is a .NET 2.0 based library containing static methods for all the unit sets present in the Windows 7 calculator: "Angle", "Area", "Energy", "Length", "Power", "Pressure", "Temperature", "Time", "Velocity", "Volume", "Weight/Mass".
    - UTB-PFIII-TermProj-Team DeLaFuente, Vasquez, Morales, Dartez: This is the group project for the UTB-PFIII team project. Authors include David De La Fuente, Louis Dartez, Juan Vasquez and Froylan Morales.
    - Version History to InfoPath Custom List Form: The ability to add a button to view the version history of an item when the display form is modified in InfoPath allows a user easy access to view versioning information. Out of the box, SharePoint does not allow this. This is a sandboxed solution.
    - WeatherCN - ????: WeatherCN - ????
    - WinformsPOCMVP: This is a simple and small proof of concept for the Model-View-Presenter UI design pattern with C# WinForms.
    - worldbestwebsites: Customer Connecting Websites. A website development for customer connecting.

    Read the article

  • GLSL Atmospheric Scattering Issue

    - by mtf1200
    I am attempting to use Sean O'Neil's shaders to accomplish atmospheric scattering. For now I am just using SkyFromSpace and GroundFromSpace. The atmosphere works fine, but the planet itself is just a giant dark sphere with a white blotch that follows the camera. I think the problem might rest in the "v3Attenuate" variable, as when this is removed the sphere is shown (albeit without scattering). Here is the vertex shader. Thanks for the time!

        uniform mat4 g_WorldViewProjectionMatrix;
        uniform mat4 g_WorldMatrix;
        uniform vec3 m_v3CameraPos;           // The camera's current position
        uniform vec3 m_v3LightPos;            // The direction vector to the light source
        uniform vec3 m_v3InvWavelength;       // 1 / pow(wavelength, 4) for the red, green, and blue channels
        uniform float m_fCameraHeight;        // The camera's current height
        uniform float m_fCameraHeight2;       // fCameraHeight^2
        uniform float m_fOuterRadius;         // The outer (atmosphere) radius
        uniform float m_fOuterRadius2;        // fOuterRadius^2
        uniform float m_fInnerRadius;         // The inner (planetary) radius
        uniform float m_fInnerRadius2;        // fInnerRadius^2
        uniform float m_fKrESun;              // Kr * ESun
        uniform float m_fKmESun;              // Km * ESun
        uniform float m_fKr4PI;               // Kr * 4 * PI
        uniform float m_fKm4PI;               // Km * 4 * PI
        uniform float m_fScale;               // 1 / (fOuterRadius - fInnerRadius)
        uniform float m_fScaleDepth;          // The scale depth (i.e. the altitude at which the atmosphere's average density is found)
        uniform float m_fScaleOverScaleDepth; // fScale / fScaleDepth

        attribute vec4 inPosition;

        vec3 v3ELightPos  = vec3(g_WorldMatrix * vec4(m_v3LightPos, 1.0));
        vec3 v3ECameraPos = vec3(g_WorldMatrix * vec4(m_v3CameraPos, 1.0));

        const int nSamples = 2;
        const float fSamples = 2.0;

        varying vec4 color;

        float scale(float fCos)
        {
            float x = 1.0 - fCos;
            return m_fScaleDepth * exp(-0.00287 + x*(0.459 + x*(3.83 + x*(-6.80 + x*5.25))));
        }

        void main(void)
        {
            gl_Position = g_WorldViewProjectionMatrix * inPosition;

            // Get the ray from the camera to the vertex and its length
            // (which is the far point of the ray passing through the atmosphere)
            vec3 v3Pos = vec3(g_WorldMatrix * inPosition);
            vec3 v3Ray = v3Pos - v3ECameraPos;
            float fFar = length(v3Ray);
            v3Ray /= fFar;

            // Calculate the closest intersection of the ray with the outer atmosphere
            // (which is the near point of the ray passing through the atmosphere)
            float B = 2.0 * dot(m_v3CameraPos, v3Ray);
            float C = m_fCameraHeight2 - m_fOuterRadius2;
            float fDet = max(0.0, B*B - 4.0 * C);
            float fNear = 0.5 * (-B - sqrt(fDet));

            // Calculate the ray's starting position, then calculate its scattering offset
            vec3 v3Start = m_v3CameraPos + v3Ray * fNear;
            fFar -= fNear;
            float fDepth = exp((m_fInnerRadius - m_fOuterRadius) / m_fScaleDepth);
            float fCameraAngle = dot(-v3Ray, v3Pos) / fFar;
            float fLightAngle = dot(v3ELightPos, v3Pos) / fFar;
            float fCameraScale = scale(fCameraAngle);
            float fLightScale = scale(fLightAngle);
            float fCameraOffset = fDepth * fCameraScale;
            float fTemp = (fLightScale + fCameraScale);

            // Initialize the scattering loop variables
            float fSampleLength = fFar / fSamples;
            float fScaledLength = fSampleLength * m_fScale;
            vec3 v3SampleRay = v3Ray * fSampleLength;
            vec3 v3SamplePoint = v3Start + v3SampleRay * 0.5;

            // Now loop through the sample rays
            vec3 v3FrontColor = vec3(0.0, 0.0, 0.0);
            vec3 v3Attenuate;
            for(int i = 0; i < nSamples; i++)
            {
                float fHeight = length(v3SamplePoint);
                float fDepth = exp(m_fScaleOverScaleDepth * (m_fInnerRadius - fHeight));
                float fScatter = fDepth * fTemp - fCameraOffset;
                v3Attenuate = exp(-fScatter * (m_v3InvWavelength * m_fKr4PI + m_fKm4PI));
                v3FrontColor += v3Attenuate * (fDepth * fScaledLength);
                v3SamplePoint += v3SampleRay;
            }

            vec3 first = v3FrontColor * (m_v3InvWavelength * m_fKrESun + m_fKmESun);
            vec3 secondary = v3Attenuate;
            color = vec4((first + vec3(0.25, 0.25, 0.25) * secondary), 1.0);
            // ^^ that color is passed to the frag shader and is used as the gl_FragColor
        }

    Here is also an image of the problem.

    Read the article

  • Real-Time Strategy Gameplay

    - by Ahmad Alkhawaja
    I am working on building an HTML5 RTS game, and my current state is that I am building the Campaign mode and want to define the gameplay (the scoring, unit behaviors/attributes). I am searching for links/articles/books about how to define the gameplay; for me this means:

    - The scoring
    - Figuring out levels of control (in any RTS game, there are units, individuals and squads)
    - Unit actions/attributes/properties
    - Point timing (how long will it take to play?)
    - Achievements
    - ...etc.

    I want to see how these areas are usually defined in RTS games. I expect to find a general document discussing this concept that I can use to build the gameplay. Any ideas? Is my question clear, or do I need to provide more details?

    Read the article

  • Arithmetic Coding Questions

    - by Xophmeister
    I have been reading up on arithmetic coding and, while I understand how it works, all the guides and instructions I've read start with something like: "Set up your intervals based upon the frequency of symbols in your data; i.e., more likely symbols get proportionally larger intervals." My main query is: once I have encoded my data, presumably I also need to include this statistical model with the encoding, otherwise the compressed data can't be decoded. Is that correct? I don't see this mentioned anywhere; the most I've seen is that you need to include the number of iterations (i.e., encoded symbols), but unless I'm missing something, the model also seems necessary to me. If this is true, it will obviously add overhead to the final output. At what point does this outweigh the benefits of compression (e.g., say I'm trying to compress just a few thousand bits)? Will the choice of symbol size also make a significant difference (e.g., if I'm looking at 2-bit words rather than full octets/whatever)?
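    To make the set-up step concrete, here is the interval narrowing being described, as a sketch with a made-up three-symbol model:

        // Hypothetical model: cumulative intervals sized by symbol frequency.
        var model = new[] { ('a', 0.5), ('b', 0.3), ('c', 0.2) };

        double low = 0.0, high = 1.0;
        foreach (char symbol in "abba")               // the message to encode
        {
            double range = high - low, cumulative = 0.0;
            foreach (var (sym, freq) in model)
            {
                if (sym == symbol)
                {
                    high = low + range * (cumulative + freq);
                    low += range * cumulative;
                    break;
                }
                cumulative += freq;
            }
        }
        // Any number in [low, high) now encodes "abba", but only against this exact
        // model, which is why the decoder needs the model (or an identical way to
        // rebuild it) plus the symbol count.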

    Read the article

  • How do you create a .XNB Font file for use with CocosSharp?

    - by Chris Pietschmann
    I know the .XNB file format is an XNA thing, and that CocosSharp inherits it from its MonoGame roots. However, there doesn't seem to be any information on how to create your own .XNB fonts to use with CocosSharp. I've tried searching but can't find any information. Could someone explain it here, or point me to a tutorial on how to create a .XNB font file for use with CocosSharp? A site to download already-compiled .XNB fonts would also be acceptable. Update: Another thing that makes this tricky is that I guess XNA Game Studio could be used, but it's not compatible with Windows 8.1, which is what I currently use for my dev machine...

    Read the article

  • Clockwork: A 40,000 Piece K’Nex Ball Machine [Video]

    - by Jason Fitzpatrick
    You may have built a simple marble raceway out of construction toys like LEGO or K’Nex at some point in your life. No matter how grand a raceway it was, we can assure you it had nothing on this 40,000-piece, room-sized monster. The creator, Austron, writes: This is Clockwork, my fifth major K’nex ball machine, and my largest and most complex K’nex structure to date. It took 8 months to build, has over 40,000 pieces, over 450 feet of track, 21 different paths, 8 motors, 5 lifts, and a one-of-a-kind computer-controlled crane, as well as two computer-controlled illuminated K’nex balls. For a more in-depth look at the construction we suggest checking out both his YouTube channel and his build blog. [via Make]

    Read the article

  • How to prevent Google Website Optimizer from making Google Analytics spike Direct Traffic and lower Bounce Rate?

    - by Scott
    I am using Google Website Optimizer (GWO) and Google Analytics. Whenever GWO performs a JavaScript redirect, Google Analytics will change the referrer. When the referrer changes to your own site, the traffic source becomes yourself and the visit is reclassified as Direct Traffic. For example: a visitor goes to Google, searches for my great service, and clicks the link that goes to the website page /home/. At this point, Google Analytics tracks the referrer as Google. However, /home/ has a GWO JavaScript redirect to a battery of A/B tests: /home-1/, /home-2/, or /home-3/. When the redirect from /home/ to /home-1/ occurs, Google Analytics on the /home-1/ page now thinks the referrer is your own site and converts the referrer to Direct Traffic, since Direct Traffic is the bucket for unknown sources. I'm really surprised that GWO and GA behave this way when they both come from Google. Now, how does one fix this to prevent GWO from overwriting the referrer?

    Read the article

  • ASP.NET WebAPI Security 2: Identity Architecture

    - by Your DisplayName here!
    Pedro has beaten me to the punch with a detailed post (and diagram) about the WebAPI hosting architecture. So go read his post first, then come back so we can have a closer look at what that means for security.

    The first important takeaway is that WebAPI is hosting independent. Currently it ships with two host integration implementations: one for ASP.NET (aka web host) and one for WCF (aka self host). Pedro nicely shows the integration into the web host. Self hosting is not done yet, so we will mainly focus on the web hosting case, and I will point out security-related differences where they exist.

    The interesting part for security (amongst other things, of course) is the HttpControllerHandler (see Pedro's diagram); this is where the host-specific representation of an HTTP request gets converted to the WebAPI abstraction (called HttpRequestMessage). The ConvertRequest method does the following:

    1. Create a new HttpRequestMessage.
    2. Copy URI, method and headers from the HttpContext.
    3. Copy HttpContext.User to the Properties<string, object> dictionary on the HttpRequestMessage. The key used for that can be found on HttpPropertyKeys.UserPrincipalKey (which resolves to "MS_UserPrincipal").

    So the consequence is that WebAPI receives whatever IPrincipal has been set by the ASP.NET pipeline (in the web hosting case). Common questions are:

    - Are there situations where this property does not get set? Not in ASP.NET: the DefaultAuthenticationModule in the HTTP pipeline makes sure HttpContext.User (and Thread.CurrentPrincipal; more on that later) are always set, either to some authenticated user or to an anonymous principal. This may be different in other hosting environments (again, more on that later).
    - Why so generic? Keep in mind that WebAPI is hosting independent and may run on a host that materializes identity completely differently compared to ASP.NET (or .NET in general). This gives them a way to evolve the system in the future.
    - How does WebAPI code retrieve the current client identity? HttpRequestMessage has an extension method called GetUserPrincipal() which returns the property as an IPrincipal.

    A quick look at self hosting shows that the moral equivalent of HttpControllerHandler.ConvertRequest() is HttpSelfHostServer.ProcessRequestContext(). Here the principal property gets set only when the host is configured for Windows authentication (an inconsistency).

    Do I like that? Well, yes and no. Here are my thoughts:

    - I like that it is very straightforward to let WebAPI inherit the client identity context of the host. This might not always be what you want; think of an ASP.NET app that consists of UI and APIs, where the UI might use Forms authentication and the APIs token-based authentication. In that case it would be good if the two parts could live in separate security worlds.
    - It makes total sense to have this generic hand-off point for identity between the host and WebAPI. It also makes total sense for WebAPI plumbing code (especially handlers) to use the WebAPI-specific identity abstraction.
    - But, c'mon, we are running on .NET. And the way .NET represents identity is via IPrincipal/IIdentity. That's what every .NET developer on this planet is used to. So I would like to see a User property of type IPrincipal on ApiController.
    - I don't like the fact that Thread.CurrentPrincipal is not populated. T.CP is a well-established pattern as a one-stop shop to retrieve the client identity on .NET. That makes a lot of sense, even if the name is misleading at best. There might be existing library code you want to call from WebAPI that makes use of T.CP (e.g. PrincipalPermission, or a simple .Name or .IsInRole()). Having the client identity as an ambient property is useful for code that does not have access to the current HTTP request (for calling GetUserPrincipal()).
    - I don't like the fact that the client identity conversion from host to WebAPI is inconsistent. This makes writing security plumbing code harder. I think the logic should always be: if the host has a client identity representation, copy it; if not, set an anonymous principal on the request message.

    Btw, please don't annoy me with the "but T.CP is static, and static is bad for testing" chant. T.CP is a getter/setter and, in fact, I find it beneficial to be able to set different security contexts in unit tests before calling into some logic. And, in case you have wondered, T.CP is indeed thread static (the name comes from a time when a logical operation was bound to a thread, which is not true anymore). But all thread creation APIs in .NET actually copy T.CP to the new thread they create. This has been the case since .NET 2.0 and is certainly an improvement compared to how Win32 does things.

    So to sum it up: the host plumbing copies the host client identity to WebAPI (this is not perfect yet, but will surely be improved). Or in other words: the current WebAPI bits don't ship with any authentication plumbing, but solely use whatever authentication (and thus client identity) is set up by the host. WebAPI developers can retrieve the client identity from the HttpRequestMessage. Hopefully my proposed changes around T.CP and the User property on ApiController will be added.

    In the next post, I will detail how to add WebAPI-specific authentication support, e.g. for Basic Authentication and tokens. This includes integrating the notion of claims-based identity. After that we will look at the built-in authorization bits and how to improve them as well. Stay tuned.
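    P.S. For reference, here is what retrieving the client identity looks like in controller code against the current bits. A minimal sketch: the controller is made up, and the namespaces are my recollection of the current beta, so treat those as assumptions:

        using System.Net.Http;
        using System.Security.Principal;
        using System.Web.Http;

        public class SampleController : ApiController
        {
            public string Get()
            {
                // GetUserPrincipal() reads the "MS_UserPrincipal" property that
                // the host plumbing copied onto the HttpRequestMessage.
                IPrincipal principal = Request.GetUserPrincipal();

                return principal != null && principal.Identity.IsAuthenticated
                    ? "Hello, " + principal.Identity.Name
                    : "Hello, anonymous";
            }
        }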

    Read the article

  • Critical Patch Update for April 2010 Now Available

    - by Steven Chan
    The Critical Patch Update (CPU) for April 2010 was released on April 13, 2010. Oracle strongly recommends applying the patches as soon as possible.

    The Critical Patch Update Advisory is the starting point for relevant information. It includes a list of products affected, pointers to obtain the patches, a summary of the security vulnerabilities, and links to other important documents. Supported products that are not listed in the "Supported Products and Components Affected" section of the advisory do not require new patches to be applied. Also, it is essential to review the Critical Patch Update supporting documentation referenced in the Advisory before applying patches, as this is where you can find important pertinent information. The Critical Patch Update Advisory is available at the following location: Oracle Technology Network.

    The next four Critical Patch Update release dates are:

    - July 13, 2010
    - October 12, 2010
    - January 18, 2011
    - April 19, 2011

    Read the article

  • SQL Saturday 43 in Redmond

    - by AjarnMark
    I attended my first SQLSaturday a couple of days ago, SQLSaturday #43 in Redmond (at Microsoft).  I got there really early, primarily because I forgot how fast I can get there from my home when nobody else is on the road.  On a weekday in rush hour traffic, that would have taken two hours.  I gave myself 90 minutes, and actually got there in about 45.  Crazy! I made the mistake of going to the main Microsoft campus, but that's not where the event was being held.  Instead it was in a big Microsoft conference center on the other side of the highway.  Fortunately, I had the address with me and quickly realized my mistake.  When I got back on track, I noticed that there were bright yellow signs out on the street corner that looked like they said they were for SOL Saturday, which actually was appropriate since it was the sunniest day around here in a long time.

    Since I was there so early, the registration was just getting set up, so I found Greg Larsen, who was coordinating things, and offered to help.  He put me to work with a group of people organizing the pre-printed raffle tickets and stuffing swag bags.

    I had never been to a SQLSaturday before this one, so I wasn't exactly sure what to expect, even though I have read about a few on some blogs.  It makes sense that each one will be a little bit different, since they are almost completely volunteer driven and the whole concept is still in its early stages.  I have been to the PASS Summit for the last several years, and was hoping for a smaller version of that.  Now, it's not really fair to compare one free day of training run entirely by volunteers with a multi-day, $1,000+ event put on under the direction of a professional event management company.  But there are some parallels.

    At this SQLSaturday, there was no opening general session, just coffee and pastries in the common area / expo hallway and straight into the first group of sessions.  I don't know if that was because there was no single room large enough to hold everyone, or for other reasons.  This worked out okay, but the organization guy in me would have preferred to have even a 15-minute welcome message from the organizers with a little overview of the day.  Even something as simple as, "Thanks to persons X, Y, and Z for helping put this together… Sessions will start in 20 minutes and are all in rooms down this hallway… the bathrooms are on the other side of the conference center… lunch today is pizza and we would like to thank sponsor Q for providing it."  It doesn't need to be much, certainly not a full-blown Keynote like at the PASS Summit, but something to use as a rallying point to pull everyone together and get the day off to an official start would be nice.  Again, there may have been logistical reasons why that was not feasible here.  I'm just putting out my thoughts for other SQLSaturday coordinators to consider.

    The event overall was great.  I believe that there were over 300 in attendance, and everything seemed to run smoothly, at least from an attendee's point of view, where there were plenty of muffins in the morning and pizza in the afternoon, with plenty of pop to drink.  And hey, if you've got the food and drink covered, a lot of other stuff could go wrong and people will be very forgiving.  But as I said, everything appeared to run pretty smoothly, at least until Buck Woody showed up in his Oracle shirt.  Other than that, the volunteers did a great job! I was a little surprised by how few people in my own backyard I know.
It makes sense if you really think about it, given how many companies must be using SQL Server around here.  I guess I just got spoiled coming into the PASS Summit with a few contacts that I already knew would be there.  Perhaps I have been spending too much time with too few people at the Summits and I need to step out and meet more folks.  Of course, it also is different since the Summit is the big national event and a number of the folks I know are spread out across the country, so the Summit is the only time we’re all in the same place at the same time.  I did make a few new contacts at SQLSaturday, and bumped into a couple of people that I knew (and a couple others that I only knew from Twitter, and didn’t even realize that they were here in the area). Other than the sheer entertainment value of Buck Woody’s session, the one that was probably the greatest value for me was a quick introduction to PowerShell.  I have not done anything with it yet, but I think it will be a good tool to use to implement my plans for automated database recovery testing.  I saw just enough at the session to take away some of the intimidation factor, and I am getting ready to jump in and see what I can put together in the next few weeks.  And that right there made the investment worthwhile.  So I encourage you, if you have the opportunity to go to a SQLSaturday event near you, go for it!

    Read the article

  • "Linux has failed on the desktop," says the creator of GNOME: a blunt opinion that divides the open source community

    Miguel de Icaza, one of the creators and a leader of the development of GNOME, the free desktop environment for Linux, argues in an article that "Linux is a failure as a mainstream consumer OS."  The view promptly set off a major controversy in the open source world, drawing scathing criticism from Linus Torvalds.  Already known for his outspokenness and his taste for controversy, Miguel de Icaza, in a long blog post entitled "What Killed the Linux Desktop," lambastes the Linux community and its development choices...

    Read the article

  • SEO consequences of merging country sites into a .com domain

    - by Pekka
    I am in the process of refactoring a number of rental portals I've built for a company with locations in Austria, Germany, Switzerland, and the Netherlands. Instead of the current setup of each country site running under its own domain name:

        www.companyname.de
        www.companyname.ch
        www.companyname.at

    I would love to merge them all in this way:

        www.companyname.com/de
        www.companyname.com/ch
        www.companyname.com/at

    with the country TLDs doing a 301 redirect to the respective .com address. However, I have been repeatedly told not to do this due to likely problems with SEO; the business is very SEO-dependent and, being a rental chain, needs to be strong in local results. So the question is: Is there an unavoidable hit in Search Engine Optimization when redirecting to a central .com domain? What measures can be taken to soften the blow? What comes to my mind is explicitly specifying a lang attribute in the html tag. Are there any other ways to specifically point out geographical location for sub-directories?
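    A minimal sketch of the 301 redirect layer the question describes, assuming a small Flask app (the host names are the ones from the question; a web server rewrite rule would do the same job).  The 301 status is what signals search engines that the move is permanent, so ranking signals can transfer to the new URLs.  Beyond the lang attribute, rel="alternate" hreflang annotations and per-subdirectory geotargeting in Google Search Console are the commonly recommended ways to mark the country subdirectories.

        from flask import Flask, redirect, request

        app = Flask(__name__)

        # Country-TLD hosts mapped to their new .com subdirectories.
        COUNTRY_DIRS = {
            "www.companyname.de": "de",
            "www.companyname.ch": "ch",
            "www.companyname.at": "at",
        }

        @app.route("/", defaults={"path": ""})
        @app.route("/<path:path>")
        def forward(path):
            country = COUNTRY_DIRS.get(request.host)
            if country is None:
                return "Unknown host", 404
            # 301 ("moved permanently") tells search engines to credit the
            # new URL with the old one's ranking signals over time.
            return redirect(f"https://www.companyname.com/{country}/{path}",
                            code=301)

        if __name__ == "__main__":
            app.run()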

    Read the article

  • Winners of Pete Brown's "Silverlight 5 In Action" Books

    - by Dave Campbell
    It's always a double-edged sword when I get to this point in a give-away... I want to give everyone something, but a deal is a deal :) It's also only through the benevolence of the folks at Manning Press that I can even do this, so thank you! Getting right to it, the winners are: Jaganadh G, Stephen Owens, and Jan Hannemann. Notice there are 3 names, not 2... I was told late last week to pick a 3rd name, so thanks again, Manning! I've already received email from my contact, who has been waiting for me to send over the winners' addresses, so you should be hearing from them shortly, I think. For everyone else, keep your eyes on my blog... as I told Manning, I like giving away other people's stuff :) Have a great day, and if you're anywhere near Phoenix and interested in Silverlight, I'll see you tomorrow at the Scott Gu event. Stay in the 'Light!

    Read the article

  • Is osTicket secure/private enough?

    - by Andy
    I was going to use osTicket as the 'help desk' for my website; however, I got a little concerned when I realized that the clients' login details for viewing their support tickets are only their email address and a ticket ID. I am probably going over the top with security, which is why I wanted some second opinions on how secure osTicket actually is and whether I should use it with my website. I run a software company, so chances are licence keys will be included in support tickets, and those are obviously sensitive and valuable; I want the likelihood of a support ticket being hacked to be very low. If there are any plugins or additions that make osTicket more 'secure', I would appreciate it if you could point me to them. Otherwise, if there are any other free, better-suited help desk packages out there, please let me know. Thanks in advance
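    Not an osTicket feature, just an illustrative sketch of the usual mitigation for the concern raised above: issue each ticket an unguessable random token and require it in the access link, so that an email address plus a sequential ticket ID is no longer enough to view a ticket.  The helper name and URL below are hypothetical.

        import secrets

        def new_ticket_token() -> str:
            # 32 random bytes, URL-safe encoded: infeasible to guess or
            # enumerate, unlike a sequential ticket ID.
            return secrets.token_urlsafe(32)

        # Hypothetical usage: store the token alongside the ticket and put it
        # in the client's access link instead of the bare ticket ID.
        link = f"https://support.example.com/tickets/view?token={new_ticket_token()}"
        print(link)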

    Read the article

  • OWB 11gR2: Migration and Upgrade Paths from Previous Versions

    - by antonio romero
    Over the next several months, we expect widespread adoption of OWB 11gR2, both for its new features and because it is the only release of Warehouse Builder certified for use with database 11gR2. Customers seeking to move existing environments to OWB 11gR2 should review the new whitepaper, OWB 11.2: Upgrade and Migration Paths. This whitepaper covers the following topics:

        - The difference between upgrade and migration, and how to choose between them
        - An outline of how to perform each process
        - When and where intermediate upgrade steps are required
        - Tips for upgrading an existing environment to 11gR2 without having to regenerate and redeploy code to your production environment

    Moving up from 10gR2 and 11gR1 is generally straightforward. For customers still using OWB 9 or 10.1, it is generally possible to move an entire environment forward complete with design and runtime audit metadata, but the upgrade process can be complex and may require intermediate processing using OWB 10.2 or OWB 11.1. Moving a design by itself is much simpler, though it requires regeneration and redeployment. Relevant details are provided in the whitepaper, so if you are planning an upgrade at some point soon, definitely start there.

    Read the article

  • What do you do to make your software design robust, flexible and clear?

    - by Oscar
    I am still maturing as a software engineer/designer/architect, whatever you may want to call it. At this point in time, I am getting small projects, private projects, and so on. What I have noticed is that even though I think about the software structure, design some diagrams, and have them really clear in my mind when I start coding, in the end my software is not as flexible and clear as I would like. I would like to ask what kind of approaches, mechanisms, or even tricks you use to make your software (and its design) flexible, robust, and clear (easy to understand and use). So... any ideas for a beginner?

    Read the article

  • To maximize chances of functional programming employment

    - by Rob Agar
    Given that the future of programming is functional, at some point in the nearish future I want to be paid to code in a functional language, preferably Haskell. Assuming I have a firm grasp of the language, plus all the basic programmer attributes (good communication skills/sense of humour/hygiene etc), what should I concentrate on learning to maximize my chances? Are there any particularly sought after libraries I should know? Alternatively, would another language be a better bet, say F#? (I'm not too fussed about the kind of programming work, so long as it's reasonably interesting and reasonably well paid, and with nice people)

    Read the article

  • How can I reorient the axes of an object?

    - by d3vid
    I spent some time in Unity yesterday trying to fire a sphere from a horizontal cylinder (like a ball from a cannon). I was using Vector3.forward, but the sphere kept coming out the top of the cylinder rather than the front. Someone suggested using Vector3.up instead, and sure enough it worked! The cylinder is vertical by default. So, it appears that when I rotated the cylinder by 90 degrees to lay it flat, the local axes remained the same. The relative front of the cylinder remained at the same point, so when I fired the sphere it shot out the new "top", not what looked to me like the "front". If I had happened to be facing the other way, I would have had to fire at Vector3.down instead. How can I reorient/reset the axes of an object so that they match my expectations? (And if I can't, how can I tell by looking which way an object is oriented?)
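    What the asker ran into is the difference between world-space direction constants and an object's local axes: in Unity, Vector3.forward is always the fixed world vector (0, 0, 1), while transform.forward (or transform.up) is that axis rotated by the object's current orientation, which is why it follows the cylinder as it tips over.  Below is a minimal plain-Python sketch of the underlying math, not Unity API, showing how a local axis ends up pointing somewhere new after a rotation.  In Unity itself, transform.forward/up/right give these rotated axes directly, and the editor's move gizmo in Local mode shows which way an object is oriented.

        import math

        def rotate_about_x(v, degrees):
            # Rotate a 3-vector about the world X axis (tipping the cylinder over).
            r = math.radians(degrees)
            x, y, z = v
            return (x,
                    y * math.cos(r) - z * math.sin(r),
                    y * math.sin(r) + z * math.cos(r))

        world_up = (0.0, 1.0, 0.0)               # Vector3.up: the upright cylinder's "front"
        local_up = rotate_about_x(world_up, 90)  # that same axis after laying the cylinder flat
        print(local_up)  # ~(0.0, 0.0, 1.0): the local axis now points along world Z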

    Read the article
