Search Results

Search found 20321 results on 813 pages for 'mobile applications'.

Page 622/813 | < Previous Page | 618 619 620 621 622 623 624 625 626 627 628 629  | Next Page >

  • Announcing Oracle Receivables Generic Data Fix (GDF) for Refunds

    - by user793553
    Here's the first of what will be a series of Generic Data Fixes (GDFs) to be released by Receivables Development. Generic Data Fixes are created by development to fix data issues caused by bugs in the application code. Other GDF benefits and features include:

    - Developed for bugs that can cause data issues.
    - Provides a SELECT script that uses an identification/signature query to identify and report all data affected by the condition caused by a bug.
    - Allows customers to view and modify what will be fixed.
    - Provides a separate FIX script to fix the data reported by the SELECT script. The FIX script creates backup tables for the data that is fixed/updated.
    - Available on My Oracle Support for download.

    In Release 12, a refund can be created by either of the following methods:

    - Applying a receipt to the Refund activity, which creates an invoice in Payables
    - Going directly into Payables to create a refund for an open credit memo in Receivables

    When the invoice in Payables that is associated with the refund is cancelled, the corresponding refund application or credit memo in Receivables is not properly re-instated: the receipt remains applied to the refund when it should be automatically unapplied, and the credit memo stays closed instead of being re-opened. Doc ID 761993.1 includes the patch to make sure this doesn't happen in the future, as well as a GDF script to fix the current data (script name: ar_std_refund_unapp.sql). Download the script and run it in READ_ONLY_MODE to identify 'refund' applications with this problem. Stay tuned for more GDF scripts coming soon...

    Read the article

  • Blurry printed raster images with Brother MFC-8840D

    - by Adam Monsen
    (NOTE: crossposted here: ubuntuforums.org/showthread.php?t=1621795) I've got a Brother MFC-8840D. Works great with Ubuntu server! Setting up a CUPS print server was pretty straightforward, and I also finally got network scanning working reliably with saned. Printing documents and web pages works well: fonts are crisp and clear, etc. One issue has got me completely vexed, though: printing raster (i.e. JPG) images. They come out blurry. For example, I can scan a page of black and white text at 150 or 300 dpi. The grayscale image looks perfect on my monitor, but the printed version is much blurrier than the original, regardless of the "print resolution" dpi I choose. As a counterexample, if I use the "copy" function of the MFC-8840D, the copy looks excellent, and this function is much, much faster than scanning and then printing the same scan. I've googled around a bunch and tried different tricks (printing a PDF with the image from Evince, printing with GIMP, EOG and other applications) but I just can't print anything that looks as good as a copy made with the MFC-8840D. Any ideas? I'm using Ubuntu 10.04.1 LTS server. I'm using the PPD file from solutions.brother.com. Thanks, -Adam

    Read the article

  • Synaptics touchpad problem when disabling it and then enabling it

    - by CYREX
    My girlfriend has an HP dv6000. In Ubuntu 10.10 32-bit the Synaptics touchpad works fine, but the problem starts when I disable it and then re-enable it. When I press the disable button for the Synaptics touchpad, it disables the mouse AND the keyboard. After re-enabling, keyboard keys and mouse clicks do not work. If I click on the panel below, for example the Applications, Places or System buttons, the focus gets stuck there forever. I can open Nautilus by clicking on it, but I cannot use the menus, the ALT+F2 function, see the wireless connections, lower the sound through the panel, etc. Here comes the weird part: if I press CTRL+ALT+F1 (or any other tty for that matter) and then come back to CTRL+ALT+F7 where the GUI is, everything works perfectly again. This started about a week ago, but she only told me just now. I checked dmesg, which has for some time been throwing warnings about "Skipping EDID probe due to cached edid", but from what I could find out this did not start the problem. NOTE: I do not need to log in when I do CTRL+ALT+F1; I just need to change to another tty and then come back to F7. What could be causing this problem?

    Read the article

  • Single device that can work as tablet and as desktop PC

    - by flow
    I have a MacBook Pro and an iPad 2. I tried to use the iPad 2 for work when I am out of the office, but this is not very comfortable, since one has to sync documents, not all OS X Lion apps exist for the iPad 2, and so on. I think a solution would be a single tablet device that at the office can be connected to a docking station, giving me the equivalent of my MacBook, but that lets me touch the screen and use the same applications while on the road. As far as I know, the iPad 2 does not and will not allow this. Therefore I wonder if you could recommend other hardware/tablet/etc. that could work this way now, as of March 2012.

    Read the article

  • Setting up 2 external monitors on a laptop with VGA splitter

    - by mike
    I have a laptop with a graphics card that supports 2 displays. I would like to know the easiest way to set it up so I can close my laptop lid and use 2 external monitors (unique displays, not mirrored). I use it primarily for office applications and video and want a quality, clear picture. The laptop has 1 VGA port, and I have two 24" 1920x1200 monitors that have VGA and DVI ports. So a few questions:

    - Can I just use a VGA splitter? (I've seen mixed feedback on this.)
    - Would a VGA to 2x DVI splitter give better picture quality? (if such a thing exists)
    - Would I be better off upgrading to a laptop with 2 digital ports? (I just see a lot with VGA and HDMI, though.)

    Specs:

    - Model: Toshiba Satellite C675-S106 (Windows 7)
    - Graphics card: Intel HD Graphics 3000 (supports 2 displays)
    - Processor: Intel Core i3-2350M

    Read the article

  • Seeking advice on tools and technology for my new game [closed]

    - by k.k. slider
    I'm a C# developer who has been programming a game in my spare time using XNA and Visual Studio. The game's logic is mostly done and I've completed a prototype that has most of the functionality of (what I envision to be) the final game. However, having heard about the uncertain future and (possibly) limited audience for XNA games, I'm looking to switch platforms... but I don't know what technology would best suit my needs. Below are some specifics about my game and what exactly I'm looking for, if you're interested:

    - The game is a 2D turn-based tactical RPG (strategy game) for two players. It is a basic sprite- and tile-based game with animations and sound. 3D capabilities are not necessary.
    - I'd like to allow players to compete with others online, and have a basic ranking/matchmaking system. I will probably need something that can interact with a server and a database (the game is turn-based and has no RNG, so cheating would be easy to detect even if most computation is done client-side and minimal data is sent to the server).
    - Ideally, I would be able to release an early version of the game and have people give feedback as I develop additional features (similar to Minecraft). I'd prefer to have a way to release periodic updates to the game instead of releasing an absolute final product.
    - To reach the widest possible audience, I'd prefer technology that allows me to release on PC, Android, iOS, and (maybe) Mac. This is a game with simple mouse inputs which can fit on a mobile touch screen.
    - The game should be monetizable. If I find success with this game, then I may consider becoming a full-time indie game developer. I have several other game ideas and have learned quite a bit from my first attempt at game development. My first thought was an F2P/microtransaction model, but I'm open to other suggestions.
    - Language isn't a primary concern of mine, since I have a decent amount of experience using several languages to program large projects. I'm willing to spend money (e.g. on a developer's license), but the more expensive it gets, the more hesitant I am to use it.

    I've looked into the following solutions... there are a LOT of tools out there... if anyone has experience with any of these and would like to recommend/reject any of them, it would be helpful:

    - C#/.NET (XNA/MonoGame/SDL/SlimDX/Xamarin/ExEn/ANX?)
    - HTML5/JS (AppMobi/PhoneGap/Marmalade/FlashCanvas/Cordova/libRocket?)
    - Python (Pyglet/Pygame/Kivy?)
    - Java (JavaFX/libGDX?)
    - Unity/Construct 2/Cocos2D/NME/Corona/other game creation software? I'd like something that can do 2D and isn't limited by being too high-level.
    - Other languages (Lua/LOVE? Moai?)

    Thanks for answering this rather long and tedious question...

    Read the article

  • Ubuntu One prompting that my account is full, but it's not...

    - by Andreas
    My Ubuntu One client is prompting that my account is full. It has done that for over a week now, but the account is not full at all... I have tried this guide:

    1. Quit the Ubuntu One Preferences, if open
    2. Open (Lucid): Applications - Accessories - Passwords and Encryption Keys; (Maverick): System - Preferences - Passwords and Encryption Keys
    3. Click on the arrow next to "Passwords"
    4. Right-click on the Ubuntu One token and select "Delete"
    5. Go to https://one.ubuntu.com/account/machines/
    6. Click on the checkbox next to your computer
    7. Click the "Remove selected computers" button
    8. Run (Maverick): killall ubuntu-sso-login; u1sdtool -q; u1sdtool -c or (Lucid): u1sdtool -q; killall ubuntuone-login; u1sdtool -c
    9. A web page (in Lucid) or a window (in Maverick) should open, prompting you to add your computer to your Ubuntu One account
    10. Add your computer

    This guide did not change anything, and I still get prompted that my account is full every time something is syncing. I also tried to create and connect to a new account, but the new account did the same thing. I am now really confused, please help!

    Read the article

  • CPU Affinity on ARM processors

    - by dsljanus
    I am using some Raspberry Pi boards for a data acquisition system. They are nice boards, with plenty of community support around them, but they are really slow. I am thinking of gradually replacing them with ODROID multicore boards, with the Samsung Exynos processors. I have some experience using taskset to set CPU affinity on my servers, because I am always running Node.js applications that are by definition single-threaded. Now, is it possible to do this on an ARM board? I do not see why it would not be in theory, but I have doubts over how well it is going to work. Does anyone have experience with this kind of hack? Also, I would appreciate any comments about ARM CPUs and how they differ from x86.
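
    For what it's worth, taskset works through the same Linux scheduler syscalls on ARM as on x86, so pinning should behave the same way. As a minimal sketch in Python (a Linux-only API; the PID below is hypothetical):

      import os

      # Pin the current process to core 0; the call is identical on ARM and x86 Linux.
      os.sched_setaffinity(0, {0})

      # Pin an already-running process, e.g. a Node.js worker, to cores 2 and 3.
      node_pid = 1234  # hypothetical PID of the Node.js process
      os.sched_setaffinity(node_pid, {2, 3})

      # Verify the mask actually took effect.
      print(os.sched_getaffinity(node_pid))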

    Read the article

  • Application toolkits like Qt versus traditional game/multimedia libraries like SFML

    - by Aaron
    I currently intend to use SFML for my next game project. I'll need a substantial GUI though (RPG/strategy-type), so I'll either have to implement my own or try to find an appropriate third-party library, which seems to boil down to CEGUI, libRocket, and GWEN. At the same time, I do not anticipate doing that many advanced graphical effects. My game will be 2D and primarily sprite-based, with some sprite animations. I've recently discovered that Qt applications can have their appearance styled so that they don't have to look like plain OS apps. Given that, I am beginning to consider Qt a valid alternative to SFML. I wouldn't have to implement the GUI functionality I'd need, and I may not be taking advantage of SFML's lower-level access anyway. The only drawbacks I can think of immediately are the learning curve for Qt and figuring out how to fit game logic inside such a framework after getting used to the input/update/render loop of traditional game libraries. When would an application toolkit like Qt be more appropriate for a game than a traditional game or multimedia library like SFML?
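
    On the loop question: event-driven toolkits like Qt typically replace the explicit input/update/render loop with a repeating timer that drives the update step from the event loop. A minimal hedged sketch, in Python with PyQt5 rather than C++ Qt (assuming PyQt5 is installed; the pattern carries over directly):

      import sys
      from PyQt5.QtCore import QTimer
      from PyQt5.QtWidgets import QApplication, QLabel

      app = QApplication(sys.argv)
      label = QLabel("frame 0")
      label.show()

      frame = 0

      def tick():
          # One iteration of the traditional update/render step, now driven by the event loop.
          global frame
          frame += 1
          label.setText(f"frame {frame}")

      timer = QTimer()
      timer.timeout.connect(tick)
      timer.start(16)  # roughly 60 updates per second

      sys.exit(app.exec_())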

    Read the article

  • Can RemoteApp RD Web Access be accessed from my local system?

    - by shiva
    I am new to RemoteApp and Remote Desktop access. I can access the applications that I have published from my server using the link FQDN\rdweb, but on trying to access the same URL from my local system (outside the server domain, say from my home PC) I get a "not found" error. Is there anything that I need to change on my local system to be able to access the remote applications? Or is it that to access the web apps I need to be logged into the server? Please help me understand this.

    Read the article

  • ArchBeat Link-o-Rama for October 18, 2013

    - by OTN ArchBeat
    - Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema. Another detailed technical post from the always prolific Lucas Jellema.
    - Evil Behind ChangeEventPolicy PPR in CRUD ADF 12c and WebLogic Stuck Threads | Andrejus Baranovskis. The latest post from Oracle ACE Director Andrejus Baranovskis is a bit of a preview of his presentation at the upcoming UKOUG 2013 event.
    - Podcast: Interview with authors of "Hudson Continuous Integration in Practice". For your listening pleasure... here's an Oracle Author Podcast interview with "Hudson Continuous Integration in Practice" authors Ed Burns and Winston Prakash.
    - Manual Recovery Mechanisms in SOA Suite and AIA | Shreenidhi Raghuram. Solution architect Shreenidhi Raghuram's post combines information from several sources to provide "a quick reference for Manual Recovery of Faults within the SOA and AIA contexts."
    - Event: Harnessing Oracle Weblogic and Oracle Coherence. This OTN Virtual Developer Day event features eight sessions in two tracks, with presentations and hands-on labs for developers and architects delivered by experts in WebLogic, Coherence, and ADF. Registration is free. November 5th, 2013. 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT.
    - Podcast: IoT Challenges and Opportunities - Part 2. Part 2 of the OTN ArchBeat Internet of Things podcast features a roundtable discussion of IoT challenges: massive data streams, security and privacy issues, evolving standards and protocols. Listen!
    - Video: Design - ADF Architectural Patterns - Two for One Deal | Chris Muir. Chris Muir explores the reuse of BTF workspaces across multiple applications and the advantages and disadvantages of reuse at the application level.
    - Thought for the Day: "Can't nothing make your life work if you ain't the architect." (Terry McMillan, American author, born October 18, 1951.) Source: brainyquote.com

    Read the article

  • Ubuntu 13.04 under Parallels Desktop - Black Desktop after X Windows Update

    - by Bob Reckhow
    I have been running Ubuntu 13.04 successfully on a MacBook Pro in a virtual machine in Parallels Desktop 9. Today (2013-10-17), after applying today's Ubuntu update, which included updates to X Windows, my Ubuntu 13.04 virtual machine launches and the launcher comes up, but the screen background is solid black rather than the shaded orange colour of the default desktop background (and my desktop icons are "hidden behind this blackness" as well). I can launch applications from the launcher, and there is a very brief white flash on the screen, and then it returns to black. It's as if there is a "black blanket" covering the entire screen, so there is no way to interact with any application windows using the keyboard or mouse. The icons of the launcher are responsive to the mouse, so I can right-click and quit any application I have launched, but the rest of the screen is non-responsive to keyboard or mouse. This same behaviour happens with two different versions of Parallels Tools, so I am quite sure this is not a Parallels problem per se, although I could believe that it could be a problem with the interface between Parallels and this newly updated X Windows code. Could anyone tell me what has happened, and how I might be able to fix this problem, so I can continue to use my Ubuntu 13.04 virtual machine? (I do have the option of reverting to a previous version of my virtual machine from before this update, but if possible I would prefer to keep my version of Ubuntu 13.04 up to date with the latest updates.) Thanks, Bob

    Read the article

  • Moving all UI logic to Client Side?

    - by Mag20
    Our team originally consisted of mostly server-side developers with minimal expertise in JavaScript. In ASP.NET we used to write a lot of UI logic in code-behind or, more recently, through controllers in MVC. A little while ago, two strong client-side developers joined our team. They can do in HTML/CSS/JavaScript pretty much anything that we could previously do with server-side code and server-side web controls:

    - Show/hide controls
    - Do validation
    - Control AJAX refreshing

    So I started to think that maybe it would be more efficient to just create a high-level API around our business logic, kind of like the Amazon Fulfillment API (http://docs.amazonwebservices.com/fws/latest/APIReference/), so that client-side developers would fully take over the UI, while server-side developers would concentrate only on business logic. So for an ordering system you would have a high-level API like:

      OrderService.asmx
      - CreateOrderResponse CreateOrder(CreateOrderRequest)
      - AddOrderItem
      - AddPayment
      - SubmitPayment
      - GetOrderByID
      - FindOrdersByCriteria
      - ...

    There would be JSON/REST access to the API, so it would be easy to consume from the client-side UI. We could use this API both for internal UI development and for third parties to create their own applications. With advances in JavaScript and the availability of good client-side developers, is it a good time to get rid of code-behind/controllers and just concentrate on developing high-level APIs (à la Amazon) that client-side developers can consume?
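
    For illustration, consuming such a facade from the client side is just JSON over HTTP; a minimal sketch using Python's requests library (the URL, payload fields, and response shape are all hypothetical):

      import requests

      BASE = "https://example.com/api"  # hypothetical JSON/REST facade over OrderService

      # Create an order; all business logic and validation stay on the server.
      resp = requests.post(
          BASE + "/orders",
          json={"customerId": 42, "items": [{"sku": "WIDGET-A", "qty": 2}]},
          timeout=10,
      )
      resp.raise_for_status()
      order = resp.json()  # e.g. {"orderId": 1001, "status": "created"}

      # The client-side UI only renders what the API returns.
      print(order)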

    Read the article

  • Windows 8: SL and HTML

    - by xamlnotes
    I was just pointed to a comment on my friend Andrew Brust's blog about Silverlight versus HTML 5. Andrew's blog is here: http://geekswithblogs.net/andrewbrust/archive/2011/11/23/windows-8-will-be-here-tomorrow-but-should-silverlight-be.aspx#600915 You can get another idea from another friend of mine, Billy Hollis, here: http://geekswithblogs.net/jalexander/archive/2011/04/09/the-eternal-battle-rich-v.-reachhellip--guest-blogger-billy-hollis.aspx The commenter is raving about HTML 5 and how that's the future and SL is not. Well, my reaction is "hogwash". Sure, HTML 5 is important and does some interesting stuff. Check out what Bing.com is doing with it on some days and you can see. But to say that XAML is dead is nuts. I have been wrapping up bugs on a cross-browser version of an application for a while now. What's the state of cross-browser today? Well, better than a few years ago, but far from perfect. Each browser vendor interprets the specs in a slightly different way, and you must account for them. The worst offender among major browsers? Apple and its Safari. I had to make more changes for it than for any other. What's that got to do with XAML and SL/WPF? Well, you write your SL code once and it runs in all browsers that support it, no changes. The iPad does not support it? Well, they should be taken to court and forced to, just like MS and others have been in the past for locking out competitors. Line-of-business applications? Write them in SL or WPF or both. Use the power of XAML, which far outreaches HTML in any flavor, and move on. We do need HTML 5, but it's not a panacea, nor will it replace all other technologies.

    Read the article

  • Creating a Corporate Data Hub

    - by BuckWoody
    The Windows Azure Marketplace has a rich assortment of data and software offerings for you to use - a type of Software as a Service (SaaS) for IT workers, not necessarily for end-users. Among those offerings is the "Data Hub" - a codename for a project that ironically actually does what the codename says.

    In many of our organizations, we have multiple data quality issues. Finding data is one problem, but finding it just once is often a bigger problem. Lots of departments and even individuals have stored the same data more than once, and in some cases made changes to one of the copies. It's difficult to know which location or version of the data is authoritative. Then there's the problem of accessing the data. It's fairly straightforward to publish a database, share or other location internally to store the data. But then you have to figure out who owns it, how it is controlled, and pass out the various connection strings to those who want to use it. And then you need to figure out how to let folks access the internal data externally - bringing up all kinds of security issues. Finally, in many cases our user community wants us to combine data from the internal sources with external data, bringing up the security, strings, and exploration issues all over again.

    Enter the Data Hub. This is an online offering, where you assign an administrator and data stewards. You import the data into the service, and it's available to you - and only you and your organization, if you wish. The basic steps for this service are to set up the portal for your company, assign administrators and permissions, and then assign data areas and import data into them. From there you make them discoverable, and then you have multiple options by which you or your users can access that data. You're then able, if you wish, to combine that data with other data in one location.

    So how does all that work? What about security? Is it really that easy? And can you really move the data definition off to the Subject Matter Experts (SMEs) that know the particular data stack better than the IT team does? Well, nothing good is easy - but using the Data Hub is actually pretty simple. I'll give you a link in a moment where you can sign up and try this yourself. Once you sign up, you assign an administrator. From there you'll create data areas, and then use a simple interface to bring the data in. All of this is done in a portal interface - nothing to install, configure, update or manage. After the data is entered in, and you've assigned meta-data to describe it, your users have multiple options to access it. They can simply use the portal - which actually has powerful visualizations you can use on any platform, even mobile phones or tablets. Your users can also hit the data with Excel - which gives them ultimate flexibility for display, all while using an authoritative, single reference for the data. Since the service is online, they can do this wherever they are - given the proper authentication and permissions. You can also hit the service with simple API calls, like this one from C#: http://msdn.microsoft.com/en-us/library/hh921924 You can make HTTP calls instead of code, and the data can even be exposed as an OData feed. As you can see, there are a lot of options. You can check out the offering here: http://www.microsoft.com/en-us/sqlazurelabs/labs/data-hub.aspx and you can read the documentation here: http://msdn.microsoft.com/en-us/library/hh921938
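
    To illustrate the OData option, a feed is just authenticated HTTP that returns Atom or JSON, so any client stack can consume it. A hedged sketch in Python (the feed URL and account key below are placeholders, not real endpoints; Marketplace-style feeds of this era used Basic auth with an account key):

      import requests

      FEED = "https://api.datamarket.azure.com/example/datahub/v1/Orders"  # hypothetical feed
      ACCOUNT_KEY = "..."  # issued to each user by the service

      resp = requests.get(
          FEED,
          params={"$top": 10, "$format": "json"},  # standard OData query options
          auth=("", ACCOUNT_KEY),                  # empty username, key as password
          timeout=10,
      )
      resp.raise_for_status()
      for row in resp.json()["d"]["results"]:      # OData v2 JSON envelope
          print(row)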

    Read the article

  • Oracle E-Business Suite is Helping to Save Lives at the National Marrow Donor Program

    - by Di Seghposs
    To improve the management of its life-saving operations, the National Marrow Donor Program recently modernized its financial and procurement operations by upgrading to Oracle E-Business Suite 12.1.   As the global leader in bone marrow and umbilical cord blood transplants, the NMDP manages a complex ecosystem of donor, patient, hospital, and biological data. “Maintaining accurate data and having an efficient matching process is essential, particularly as our global database of bone marrow patients grows and donor lists expand,” says Bruce Schmaltz, director of finance/controller. “We rely on the Oracle E-Business Suite to ensure our procurement and financial management processes meet the highest standards, enabling our growing non-profit to work swiftly and efficiently to help improve and save lives.” As the non-profit organization and its registry grew larger, NMDP needed a modern platform to store and integrate its financial information and complicated procurement process. It selected Oracle E-Business Suite for its ability to fit seamlessly into NMDP’s enterprise architecture. NMDP initially implemented Oracle E-Business Suite release 12 by leveraging Oracle Business Accelerators, which are rapid implementation tools and templates that help reduce implementation time and costs. With Oracle Financial Management and Oracle Procurement, NMDP has streamlined back-office processes and integrated its procure-to-pay business processes by leveraging industry leading accounts payable, accounts receivable, and general ledger modules. NMDP is currently rolling out Oracle Hyperion Performance Management applications and plans to implement Oracle Order Management and Oracle Advanced Pricing by the end of 2012. Read more details about NMDP’s modernization efforts.  For more updates on Oracle Financial Management Solutions, view our November 2012 Oracle Information InDepth Financial Management newsletter. Subscribe Now. 

    Read the article

  • Publishing a game -- any way to target both WP7 and Win8 Store?

    - by Rei Miyasaka
    I'm at a dilemma which seems should soon become an important issue for a lot of developers. If I build a game in XNA, I won't be able to publish it on the Windows 8 Store, as it would be a classic application -- and classic applications can't be sold on the store. If I build a game in Metro DirectX, I would be able to sell it on the Store, but porting it to Windows Phone would involve porting it to Reach XNA, which in fact would likely involve more effort even than porting to OS X or Android -- both of which support C++. Of all the WinRT API that is supported on C++/JS/.NET, DirectX can only be programmed from C++. It's also unlikely that Microsoft will update Windows 7 or Vista to support the new DirectX features, although that would make the Metro DirectX the first new version of DirectX to stop supporting the immediate predecessor OS. If I build a game in Pre-Win8 DirectX 9/10/11, I won't be able to sell it on the Windows Store or Windows Phone, but I could sell it on something like Steam. It would also involve the most amount of manual plumbing. In fact, DirectWrite, despite being part of DirectX 11, doesn't talk to Direct3D. I'm getting really tired of all these restrictions -- artificial and otherwise -- and I'm coming to a point where I'm considering switching to a platform with a less fragmented API, like Android or Mac/iOS. As far as bringing a game into market goes, excluding the actual market share of any platforms that I might consider, what other factors would help me in making a decision? Just a few years ago this question was a lot easier to answer: if you were primarily concerned with Windows platforms, all you had to answer was whether you wanted DirectX, XNA, or something like SlimDX. If you made the wrong decision, no biggie -- all you really would have lost is XBox and the fairly small Windows Phone market.

    Read the article

  • Towards Ultra-Reusability for ADF - Adaptive Bindings

    - by Duncan Mills
    The task flow mechanism embodies one of the key value propositions of the ADF Framework, its primary contribution being the componentization of your applications and, implicitly, the introduction of a re-use culture, particularly in large applications. However, what if we could do more? How could we make task flows even more re-usable than they are today? Well, one great technique is to take advantage of a feature that is already present in the framework, a feature which I will call, for want of a better name, "adaptive bindings".

    What's an adaptive binding? Well, consider a simple use case. I have several screens within my application which display tabular data and which are all essentially identical; the only difference is that they happen to be based on different data collections (View Objects, bean collections, whatever) and have a different set of columns. Apart from that, they happen to be identical: same toolbar, same key functions and so on. So wouldn't it be nice if I could have a single parametrized task flow to represent that type of UI and reuse it? Hold on, you say, great idea; however, to do that we'd run into problems. Each different collection that I want to display needs different entries in the pageDef file, and:

    - I want to continue to use the ADF Bindings mechanism rather than dropping back to passing the whole collection into the task flow
    - If I do use bindings, there is no way I want to have to declare iterators and tree bindings for every possible collection that I might want the flow to handle

    Ah, joy! I reply, no need to panic, you can just use adaptive bindings.

    Defining an Adaptive Binding

    It's easiest to explain with a simple before-and-after use case. Here's a basic pageDef definition for our familiar Departments table:

      <executables>
        <iterator Binds="DepartmentsView1" DataControl="HRAppModuleDataControl" RangeSize="25"
                  id="DepartmentsView1Iterator"/>
      </executables>
      <bindings>
        <tree IterBinding="DepartmentsView1Iterator" id="DepartmentsView1">
          <nodeDefinition DefName="oracle.demo.model.vo.DepartmentsView" Name="DepartmentsView10">
            <AttrNames>
              <Item Value="DepartmentId"/>
              <Item Value="DepartmentName"/>
              <Item Value="ManagerId"/>
              <Item Value="LocationId"/>
            </AttrNames>
          </nodeDefinition>
        </tree>
      </bindings>

    Here's the adaptive version:

      <executables>
        <iterator Binds="${pageFlowScope.voName}" DataControl="HRAppModuleDataControl" RangeSize="25"
                  id="TableSourceIterator"/>
      </executables>
      <bindings>
        <tree IterBinding="TableSourceIterator" id="GenericView">
          <nodeDefinition Name="GenericViewNode"/>
        </tree>
      </bindings>

    You'll notice three changes here:

    1. Most importantly, you'll see that the hard-coded View Object name that formerly populated the iterator's Binds attribute is gone and has been replaced by an expression (${pageFlowScope.voName}). This, of course, is key: you can pass a parameter to the task flow telling it exactly what VO to instantiate to populate this table!
    2. I've changed the IDs of the iterator and the tree binding, simply to reflect that they are now re-usable.
    3. The tree binding itself has been simplified and the node definition is now empty. What this effectively means is that the #{node} map exposed through the tree binding will expose every attribute of the underlying iterator's collection - neat!

    (Kudos to Eugene Fedorenko at this point, who reminded me that this was even possible in his excellent "deep dive" session at OpenWorld this year.)

    Using the adaptive binding in the UI

    Now that we have a parametrized binding, we have to make changes in the UI as well: first of all to reflect the new ID that we've assigned to the binding (of course), but also to change the column list from a fixed, known list to a generic, metadata-driven set:

      <af:table value="#{bindings.GenericView.collectionModel}" rows="#{bindings.GenericView.rangeSize}"
                fetchSize="#{bindings.GenericView.rangeSize}"
                emptyText="#{bindings.GenericView.viewable ? 'No data to display.' : 'Access Denied.'}"
                var="row" rowBandingInterval="0"
                selectedRowKeys="#{bindings.GenericView.collectionModel.selectedRow}"
                selectionListener="#{bindings.GenericView.collectionModel.makeCurrent}"
                rowSelection="single" id="t1">
        <af:forEach items="#{bindings.GenericView.attributeDefs}" var="def">
          <af:column headerText="#{bindings.GenericView.labels[def.name]}" sortable="true"
                     sortProperty="#{def.name}" id="c1">
            <af:outputText value="#{row[def.name]}" id="ot1"/>
          </af:column>
        </af:forEach>
      </af:table>

    Of course, you are not constrained to a simple read-only table here. It's a normal tree binding and iterator that you are using behind the scenes, so you can do all the usual things. You can also see the value of using ADF BC as the back-end model, since you have the rich pantheon of UI hints to use to derive things like labels (and validators and converters...).

    One Final Twist

    To finish on a high note, I wanted to point out that you can take this even further and achieve the ultra-reusability I promised. Here's the new version of the pageDef iterator; see if you can notice the subtle change?

      <iterator Binds="${pageFlowScope.voName}" DataControl="${pageFlowScope.dataControlName}" RangeSize="25"
                id="TableSourceIterator"/>

    Yes, as well as parametrizing the collection (VO) name, we can also parametrize the name of the data control. So your task flow can graduate from being re-usable within an application to being truly generic. If you have some really common patterns within your app, you can wrap them up and reuse them across multiple developments without having to dictate data control names or connection names. This also demonstrates the importance of interacting with data only via the binding layer APIs: if you keep the code in the task flow generic in that way, you can deal with data from multiple types of data controls, not just one flavour. Enjoy!

    Read the article

  • What conventions or frameworks exist for MVVM in Perl?

    - by Will Sheppard
    We're using Catalyst to render lots of webforms in what will become a large application. I don't like the way all the form data is confusingly lumped into a big hash in the controller before being passed to the template. It seems jumbled up and messy for the template. I'm sure there are real disadvantages that I haven't described properly... are there?

    One solution is to just decide on a convention for the hash, e.g.:

      {
          defaults => { type => ['a', 'b', 'c'] },
          input    => { type => 'a' },
          output   => {
              message => "2 widgets found of type a",
              widgets => [ 'foo', 'bar' ],
          },
      }

    Another way is to store the page/form data as attributes in a class (a ViewModel?) and pass a whole object to the template, which it could use like this:

      <p class="message">[% model.message %]</p>
      [% FOREACH widget IN model.widgets %]

    Which way is more flexible for large applications? Are there any other solutions or existing Catalyst-compatible frameworks?

    Read the article

  • "This file came from another computer..." - how can I unblock all the files in a folder without having to unblock them individually?

    - by Schnapple
    Windows XP SP2 and Windows Vista have this deal where zone information is preserved in files downloaded to NTFS partitions, such that certain applications block those files until you "unblock" them. So, for example, if you download a zip file of source code to try something out, every file will display this in the security settings of the file properties: "This file came from another computer and might be blocked to help protect this computer", along with an "Unblock" button. Some programs don't care, but Visual Studio will refuse to load projects in solutions until they've been unblocked. While it's not terribly difficult to go to every project file and unblock it individually, it's a pain, and it does not appear that you can unblock multiple selected files simultaneously. Is there any way to unblock all files in a directory without having to go to them all individually? I know you can turn this off globally for all new files, but let's say I don't want to do that.
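
    For background, the block is just an NTFS alternate data stream named Zone.Identifier attached to each downloaded file, so removing that stream unblocks the file. A hedged sketch of doing that recursively in Python (Windows/NTFS only; the path is hypothetical):

      import os

      def unblock_tree(root):
          """Remove the Zone.Identifier stream from every file under root."""
          for dirpath, _dirs, files in os.walk(root):
              for name in files:
                  stream = os.path.join(dirpath, name) + ":Zone.Identifier"
                  try:
                      os.remove(stream)  # deletes only the stream, not the file itself
                  except FileNotFoundError:
                      pass               # this file was never blocked

      unblock_tree(r"C:\src\downloaded-solution")  # hypothetical folder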

    Read the article

  • Advice on refactoring PHP Project

    - by b0x
    I have a small SaaS ERP that was written some years ago using PHP. At that time it didn't use any framework, but the code isn't a mess. Nowadays the project is growing and I'm now working with 3 more programmers. Often they ask me why we don't migrate to a framework such as Laravel. Although I'd love to try Laravel, I run a small business and I don't have the time or money to stop and spend a whole year building everything from scratch. I need to live and pay the bills. So I've read a lot about this matter, and I decided that refactoring is the best way to do it. Also, I'm not so sure that a framework would make things easier.

    Business goals are:

    - Make the code easier for newly hired programmers
    - Separate the "view", in order to: release different versions of this product (using the same code) under different brands and websites at minimum cost (just changing the view), and release different versions to fit mobile/tablet
    - Make different types of this product, selling packages as if they were plugins
    - Develop custom packages for some customers (like plugins/add-ons that they can buy to put on the main application)

    Code goals:

    - Introduce best practices and standards for everyone
    - Try to build my own MVC structure
    - Improve validation of data/forms (today validation is mixed in both AJAX calls and classes)
    - Create automated testing routines for quality assurance

    My current project structure:

      class\
      extra\
      hd\
      logs\
      public_html\
      public_html\includes\
      public_html\css|js|images\

    class\ - There are three types of classes. They are all "autoloaded" with something similar to PSR-0, but I don't use namespaces.

    1. class.Something.php - Connects to the database using specific methods, e.g. Costumer->list(). It uses class.Db.php, which is an abstraction over MySQL in every method.
    2. class.SomethingProc.php - Does things that "join" results coming from class.Something.php, like if/else logic and math operations.
    3. class.SomethingHTML.php - The classes with the "HTML" suffix implement only static methods and HTML code only.

    A real-life example (all the programmers need to use $cSomething for class instances and $arrSomething for arrays):

      // Costumer.php (view)
      <?php
      $cCostumer = new Costumer();
      $arrCostumer = $cCostumer->list();
      echo CostumerHTML::table($arrCostumer);
      ?>

    extra\ - Stores 3rd-party projects/classes from others, such as mPDF, PHPMailer, etc.
    hd\ - Stores users' files outside the wwwroot dir.
    logs\ - Stores PHP logs and the system's own logs (we have a static Log::error() method that we put in every method of every class).
    public_html\ - Stores the files that people use.
    public_html\includes\ - Stores the main config.php file and all files that do "AJAX things", e.g. ajax.Costumer.php.

    Help is needed ;) So, as you can see, we have some standards, including for database things, but I want to write a manual of our rules, something that I can give to any new programmer at my company so they can get going. This is not a total mess, but it could be better in light of newer practices. What could I do to separate this as MVC, to have multiple views? Could you give me some tips considering my goals? Keep in mind the different products/custom things for specific customers without breaking the main application. URLs for tutorials, books, etc. would be nice.

    Read the article

  • A Cost Effective Solution to Securing Retail Data

    - by MichaelM-Oracle
    By Mike Wion, Director, Security Solutions, Oracle Consulting Services

    As so many noticed last holiday season, data breaches, especially those at major retailers, are now a significant risk that requires advance preparation. The need to secure data at all access points is now driven by an expanding privacy and regulatory environment coupled with an increasingly dangerous world of hackers, insider threats, organized crime, and other groups intent on stealing valuable data. This newly released Oracle whitepaper, entitled Cost Effective Security Compliance with Oracle Database 12c, outlines a powerful story related to a defense-in-depth, multi-layered security model that includes preventive, detective, and administrative controls for data security. At Oracle Consulting Services (OCS), we help to alleviate the fear of a massive data breach by providing expert services to assist our clients with the planning and deployment of Oracle's Database Security solutions. With our deep expertise in Oracle Database Security, Oracle Consulting can help clients protect data with the security solutions they need to succeed, spanning architecture/planning, implementation, and expert services, which in turn provide faster adoption and return on investment with Oracle solutions. On June 10th at 10:00AM PST, Larry Ellison will present an exclusive webcast entitled "The Future of Database Begins Soon". In this webcast, Larry will launch the highly anticipated Oracle Database In-Memory technology that will make it possible to perform true real-time, ad-hoc, analytic queries on your organization's business data as it exists at that moment and receive the results immediately. Imagine real-time analytics available across your existing Oracle applications! Click here to download the whitepaper Cost Effective Security Compliance with Oracle Database 12c.

    Read the article

  • How do I (quickly) let people know that software I am providing for free is not abandon-ware?

    - by blueberryfields
    As an independent, individual programmer: how do I let people very quickly know that I have not abandoned the software I've written and given away for free? That I am putting in the effort required to maintain and support my software at a professional level? When software written by one or two developers is available for free, or marked as open-source, the default assumption is usually that it's abandon-ware. This is usually a safe assumption - check out the answers to this question if you doubt it: Why do programmers write applications and then make them free? There are lots of programmers who provide free and/or open-source tools which are not abandon-ware, though. If we're talking about large companies, e.g. Google, there's no real problem telling the difference between supported, live tools and software, and those which are abandoned or discontinued. A lively Git repository isn't a quick signal - users have to be savvy enough to understand the repository and know where to look for it. Consistent marketing and community management take more time and effort than I can put in on my own. Also, if my software becomes popular/successful, I assume those will grow on their own and be supported by power users in the community.

    Read the article

  • Git for Application Settings

    - by devians
    I use a lot of tools at work and at home, and I'm constantly tweaking them in one location or the other. It's somewhat common practice for people to use Git to version their .vim, .vimrc, and other dot files, since you can host your config files on GitHub and get the share-ability and all the other advantages that implies. Being able to version and branch my configs sounds like a grand idea, since I'm always messing about with them. I'd like to discuss the best practice for doing this on a slightly wider scope. How would you implement it?

    - Have your config-files repo in ~/Library/Configs or similar, and symlink the appropriate files?
    - How to handle preference files for applications, e.g. iTerm2? These files are recreated every time, so you'd have to symlink 'backwards' and put a link in the repo, rather than symlinking to the repo, since the application would just delete the symlink.
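
    For the symlink approach, a minimal sketch of the linking step in Python (the repo location is a hypothetical choice; it skips anything already present rather than clobbering it):

      from pathlib import Path

      REPO = Path.home() / "Library" / "Configs"  # hypothetical dotfiles repo
      HOME = Path.home()

      def link_dotfiles():
          for src in REPO.glob(".*"):  # .vimrc, .gitconfig, ...
              if src.name == ".git":   # skip the repo's own metadata
                  continue
              dest = HOME / src.name
              if dest.is_symlink() or dest.exists():
                  continue             # don't clobber existing files or links
              dest.symlink_to(src)     # e.g. ~/.vimrc -> ~/Library/Configs/.vimrc

      link_dotfiles()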

    Read the article

  • What is a preferred method for automatically configuring and setting up an Ubuntu instance?

    - by sutch
    I am tired of manually configuring instances of Ubuntu for testing web applications and for setting up workstations. I'm even more frustrated by the issues caused by inconsistent configurations. Is there a method (hopefully not too time-consuming to learn and set up) that allows automating the setup and configuration of an Ubuntu server or workstation from an ISO? This is primarily for virtual machine instances, but it would be helpful to also create instances on hardware. I am specifically looking for a method to automate the installation of libraries (apt-get), configure services (such as Apache and MySQL), add 3rd-party software (download, extract, and build), and add libraries to scripting languages (for example, Ruby gems or CPAN packages for Perl).
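
    Preseeding the Ubuntu installer covers the from-ISO part; the post-install configuration described above can then be handled by even a small idempotent bootstrap script. A hedged sketch of that second stage in Python (run as root; package and gem names are illustrative only, and the systemctl call assumes a systemd-based release):

      import subprocess

      PACKAGES = ["apache2", "mysql-server", "build-essential"]  # illustrative list
      GEMS = ["rails"]                                           # illustrative list

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      def bootstrap():
          run(["apt-get", "update"])
          run(["apt-get", "install", "-y"] + PACKAGES)  # safe to re-run; installed packages are skipped
          for gem in GEMS:
              run(["gem", "install", gem])
          run(["systemctl", "enable", "--now", "apache2"])

      bootstrap()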

    Read the article
