Search Results

Search found 64995 results on 2600 pages for 'data import'.


  • Document-oriented vs Column-oriented database fit

    - by user1007922
    I have a data-intensive application that desperately needs a database makeover.

    The general data model: there are records with RIDs, grouped together by group IDs (GIDs). The records have arbitrary data fields (maybe 5-15), a few of them mandatory and the rest optional, and thus sparse.

    The general use model: there are LOTS and LOTS of writes. Millions to billions of records are stored. Very often they are associated with new GIDs, but sometimes they are associated with existing GIDs. There aren't as many reads, but when they happen, they need to be pretty fast, or at least constant speed regardless of the database size. And when the reads happen, they need to retrieve all the records/RIDs with a certain GID. I don't need to search by the record field values; primarily I will query by the GID and maybe the RID.

    What database implementation should I use? I did some initial research between document-oriented and column-oriented databases, and it seems the document-oriented ones are a good fit, model-wise. I could store all the records together under the same document key using the GID. But I don't really have any use for their ability to search the document contents itself. I like the simplicity and scalability of column-oriented databases like Cassandra, but how should I model my data in this paradigm for optimal performance? Should my key be the GID, with a column for each record/RID? (There may be thousands or hundreds of thousands of records in a group/GID.) Or should my key be the RID, with each row holding a column for the GID value? Which results in faster writes and reads under this model?
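
    To make the GID-as-key option concrete, here is a minimal sketch of that layout in present-day Cassandra (CQL) via the DataStax Python driver; the keyspace, table, and field names are all illustrative, not from the post:

        # A minimal sketch of the GID-as-partition-key layout, assuming the
        # DataStax cassandra-driver package; keyspace and names are illustrative.
        from cassandra.cluster import Cluster

        cluster = Cluster(["127.0.0.1"])
        session = cluster.connect("app")  # hypothetical keyspace

        # One partition per GID: a write is a single insert, and reading a whole
        # group is a single-partition query regardless of total database size.
        session.execute("""
            CREATE TABLE IF NOT EXISTS records (
                gid text,
                rid text,
                fields map<text, text>,   -- sparse optional fields
                PRIMARY KEY (gid, rid)
            )
        """)

        session.execute(
            "INSERT INTO records (gid, rid, fields) VALUES (%s, %s, %s)",
            ("group-42", "rec-1", {"mandatory_a": "x", "optional_b": "y"}))

        rows = session.execute(
            "SELECT rid, fields FROM records WHERE gid = %s", ("group-42",))
        for row in rows:
            print(row.rid, row.fields)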

    Read the article

  • Loading Obj Files in Soya3d engine

    - by John Riselvato
    I recently found Soya3D, and from what I have seen in the tutorials I will be able to make exactly what I wanted with Python skills. I have built a map generator, but the one issue is that I cannot work out from any of the documents how to load .obj files. At first I figured that I had to convert them to a .data file, but I don't understand how to do this. I just want to load a simple model of a house. I tried using soya_editor, but I cannot figure out at all how to do anything with that. Here's my script so far:

        import sys, os, os.path, soya, soya.sdlconst

        width, height = 760, 375
        soya.init("Generator 0.1", width, height)
        soya.path.append(os.path.join(os.path.dirname(sys.argv[0]), "data"))
        scene = soya.World()
        model = soya.Model.get("house")
        light = soya.Light(scene)
        light.set_xyz(0.5, 0.0, 2.0)
        camera = soya.Camera(scene)
        camera.z = 2.0
        soya.set_root_widget(camera)
        soya.MainLoop(scene).main_loop()

    "house" is in .obj form in the folder data/models. The error I get is:

        Traceback (most recent call last):
          File "introduction.py", line 7, in <module>
            model = soya.Model.get("house")
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 259, in get
            return klass._alls.get(filename) or klass._alls.setdefault(filename, klass.load(filename))
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 268, in load
            dirname = klass._get_directory_for_loading_and_check_export(filename)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 194, in _get_directory_for_loading_and_check_export
            dirname = klass._get_directory_for_loading(filename, ext)
          File "/usr/lib/pymodules/python2.6/soya/__init__.py", line 171, in _get_directory_for_loading
            raise ValueError("Cannot find a %s named %s!" % (klass, filename))
        ValueError: Cannot find a <class 'soya.Model'> named house!
        * Soya3D * Quit...

    So I am figuring that because I don't understand how to turn my files into .data files, I will need to learn that. My question is: how do I use my own models?

    Read the article

  • TechEd 2012: Office, SharePoint And More Office

    - by Tim Murphy
    I hadn’t spent any time looking at Office 365 up to this point. I met Donovan Follette on the flight down from Chicago, and I also got to spend some time discussing the product offerings with him at the TechExpo, which sealed my decision to attend this session. The main actor of his presentation was BCS – Business Connectivity Services. He explained that while this feature has existed in on-site SharePoint, it is a valuable new addition to Office 365 SharePoint Online. If you aren’t familiar with BCS, it allows you to leverage non-SharePoint enterprise data sources from SharePoint. The greatest beneficiaries are the end users, who can work with the data using a variety of Office products and more. The one thing I haven’t shaken is my skepticism of the use of SharePoint Designer, which Donovan used to create a WCF service. It is mostly my tendency to try to create solutions that can be managed through the whole application life cycle; in the past, migrating through test environments has been near impossible with anything other than content created by SharePoint Designer. There is a lot of end-user power here. The biggest consideration I think you need to examine, when reaching from your enterprise LOB data stores out to an online service and back, is that you are going to take a performance hit. This means that you have to be very aware of how you configure these integrated self-serve solutions. As a rule, make sure you are using the right tool for the right situation. I appreciated that he showed both no-code and code solutions for the consumer of the LOB data. I came out of this session much better informed about the possibilities around this product. del.icio.us Tags: Office 365,SharePoint Online,TechEd,TechEd 2012

    Read the article

  • Should I include HTML markup in my JSON response?

    - by Mike M. Lin
    In an e-commerce site, when adding an item to a cart, I'd like to show a popup window with the options you can choose. Imagine you're ordering an iPod Shuffle and now you have to choose the color and text to engrave. I'd like the window to be modal, so I'm using a lightbox populated by an Ajax call. Now I have two options:

    Option 1: Send only the data, and generate the HTML markup using JavaScript. What's nice about this is that it trims the Ajax request down to the bare minimum and doesn't mix the data with the markup. What's not so great is that I now need to use JavaScript to do my rendering, instead of having a template engine on the server side do it. I might be able to clean up the approach a bit by using a client-side templating solution.

    Option 2: Send the HTML markup. What's good about this is that the same server-side templating engine I'm using for the rest of my rendering tasks (Django) can also render the lightbox; JavaScript is only used to insert the HTML fragment into the page. So it clearly leaves the rendering to the rendering engine. Makes sense to me. But I don't feel comfortable mixing data and markup in an Ajax call for some reason. I'm not sure what makes me feel uneasy about it. I mean, it's the same way every web page is served up -- data plus markup -- right?
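
    To make the two options concrete, here is a minimal Django sketch of each; the view names, template path, and fields are illustrative, not from the post:

        # A minimal sketch of both options in Django; view, template, and
        # field names are hypothetical.
        import json
        from django.http import HttpResponse
        from django.template.loader import render_to_string

        # Option 1: return only data; the client renders the markup itself.
        def product_options_json(request, product_id):
            data = {"colors": ["silver", "blue"], "max_engraving_chars": 30}
            return HttpResponse(json.dumps(data),
                                content_type="application/json")

        # Option 2: return a ready-made HTML fragment rendered server-side;
        # the client just injects it into the lightbox.
        def product_options_html(request, product_id):
            html = render_to_string("cart/options_fragment.html",
                                    {"colors": ["silver", "blue"]})
            return HttpResponse(html)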

    Read the article

  • VS2012 - How to manually convert .NET Class Library to a Portable Class Library

    - by Igor Milovanovic
    Portable libraries are the response to the growing profile fragmentation in the .NET frameworks. With the help of portable libraries you can share code between different runtimes without dreadful #ifdef PLATFORM statements or, even worse, "Add as Link" source-file sharing practices. If you have an existing .NET class library which you would like to reference from a different runtime (e.g. you have a .NET Framework 4.5 library which you would like to reference from a Windows Store project), you can either create a new Portable Class Library and move the classes there, or edit the existing .csproj file and change the XML directly. The following example shows how to convert a .NET Framework 4.5 library to a Portable Class Library.

    First, unload the project and change the following setting in the .csproj file:

        <Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />

    to:

        <Import Project="$(MSBuildExtensionsPath32)\Microsoft\Portable\$(TargetFrameworkVersion)\Microsoft.Portable.CSharp.targets" />

    and add the following keys to the first property group in order to get Visual Studio to show the framework picker dialog:

        <ProjectTypeGuids>{786C830F-07A1-408B-BD7F-6EE04809D6DB};{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}</ProjectTypeGuids>

    After that you can select the frameworks in the Library tab of the portable library. As a last step, delete any framework references from the library, as you have them already referenced via the .NET Portable Subset.

    [1] Cross-Platform Development with the .NET Framework - http://msdn.microsoft.com/en-us/library/gg597391.aspx
    [2] Framework Profiles in .NET - http://nitoprograms.blogspot.de/2012/05/framework-profiles-in-net.html

    Read the article

  • WebCenter Spaces 11g PS2 Task Flow Customization

    - by Javier Ductor
    Previously, I wrote about Spaces template customization. In order to adapt Spaces to the customer's prototype, it was necessary to change the template and skin, as well as the Members task flow. In this entry, I describe how to customize this task flow.

    First thing to do, I downloaded the SpacesTaskflowCustomizationApplication with its guide. This application allows developers to modify task flows in Spaces, such as Announcements, Discussions, Events, Members, etc. Before starting, some configuration is needed in JDeveloper, like changing the role to 'Customization Developer' mode, although this is explained in the application guide. It is important to know that task flows are modified through libraries; they cannot be updated directly in the source code like templates, so you must use the Structure panel for this.

    Steps to customize the Members portlet:

    1. There are two Members views: showIconicView and showListView. By default it is set to the Iconic view, but in my case I preferred the List view, so I updated this default value in table-of-members-taskflow.xml.
    2. Change the TableOfMembers-ListView.jspx file. By editing this file, you can control the way this task flow is displayed, so I customized this List view using the Structure panel to get the desired look and feel.
    3. After changes are made, click Save All, because every time a library changes, an XML file is generated with all modifications listed, and they must be saved.
    4. Rebuild the project and deploy the application.
    5. Open a WLST command window and import the customization to the MDS repository with the 'import' command (see the sketch below).

    Other task flows can be customized in a similar way.
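
    Regarding step 5, the MDS import from WLST (a Jython shell, so the snippet below is Python syntax) typically looks like this minimal sketch; the credentials, application name, server name, and paths are illustrative, not from the post:

        # Run inside a WLST shell (wlst.sh). Connection details, application
        # name, and paths are hypothetical.
        connect('weblogic', 'password', 't3://adminhost:7001')

        # Import the customization documents into the MDS repository of the
        # WebCenter Spaces application.
        importMetadata(application='webcenter',
                       server='WC_Spaces',
                       fromLocation='/tmp/spaces_customizations',
                       docs='/**')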

    Read the article

  • Adding dynamic business logic/business process checks to a system

    - by Jordan Reiter
    I'm wondering if there is a good extant pattern (the language here is Python/Django, but I'm also interested on the more abstract level) for creating a business logic layer that can be extended without coding. For example, suppose that a house rental should only be available during a specific time. A coder might create the following class:

        from bizlogic import rules, LogicRule
        from orders.models import Order

        class BeachHouseAvailable(LogicRule):
            def check(self, reservation):
                house = reservation.house_reserved
                if not (house.earliest_available < reservation.starts < house.latest_available):
                    raise RuleViolationWhen("Beach house is available only between %s and %s"
                                            % (house.earliest_available, house.latest_available))
                return True

        rules.add(Order, BeachHouseAvailable, name="BeachHouse Available")

    This is fine, but I don't want to have to code something like this each time a new rule is needed. I'd like to create something dynamic, ideally something that can be stored in a database. The thing is, it would have to be flexible enough to encompass a wide variety of rules:

    - avoiding duplicates/overlaps (to continue the example, "You already have a reservation for this time/location")
    - logic rules ("You can't rent a house to yourself", "This house is in a different place from your chosen destination")
    - sanity tests ("You've set a rental price that's 10x the normal rate. Are you sure this is the right price?")

    Things like that. Before I recreate the wheel, I'm wondering if there are already methods out there for doing something like this.
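
    For comparison, here is a minimal sketch of one table-driven approach, where each rule is a data row (in Django it would be a model with these fields); all names are illustrative, not from the post:

        # A minimal, table-driven sketch: each rule compares one dotted
        # attribute path against another. Names are hypothetical.
        import operator
        from collections import namedtuple

        OPERATORS = {"lt": operator.lt, "gt": operator.gt, "eq": operator.eq}

        # One row per rule; in a real system these rows live in a database table.
        StoredRule = namedtuple("StoredRule", "name left_path op right_path message")

        def resolve(obj, dotted):
            """Follow a dotted attribute path like 'house_reserved.latest_available'."""
            for part in dotted.split("."):
                obj = getattr(obj, part)
            return obj

        def check_rules(obj, rules):
            for rule in rules:
                if not OPERATORS[rule.op](resolve(obj, rule.left_path),
                                          resolve(obj, rule.right_path)):
                    raise ValueError("%s: %s" % (rule.name, rule.message))

        # Example: the beach-house rule from the post, expressed as a data row.
        rule = StoredRule("BeachHouse Available",
                          "starts", "lt", "house_reserved.latest_available",
                          "Beach house is not available that late.")

    Rules like this become rows that non-programmers can add; the trade-off is that anything beyond simple comparisons (the duplicate/overlap check, for instance) still tends to need purpose-built code, which is why many rule engines end up as a hybrid.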

    Read the article

  • VISIT ORACLE LINUX PAVILION @ORACLE OPENWORLD

    - by Zeynep Koch
    Back by popular demand, Oracle will again host the Oracle Linux Pavilion at Oracle OpenWorld from October 1-3. The pavilion will be located in the Exhibition Hall at Moscone South, Booth 1033, next to the Oracle DEMOgrounds and Oracle Linux demopods. At the pavilion a select group of ISVs, IHVs, and SIs will showcase their products that have been Oracle Linux- and/or Oracle VM-certified. These certified products enable customer applications to run faster, thereby saving money. Partners exhibiting their solutions in the Oracle Linux Pavilion include:

    - BeyondTrust: context-aware security intelligence for dynamic IT infrastructures such as cloud, mobile, and virtual technologies
    - Centrify: control, secure, and audit access to cross-platform systems, mobile devices, and applications
    - Data Intensity: cloud services and application management
    - Fujitsu: technology platforms, private cloud, services, ubiquitous and device solutions
    - HP: converged cloud, converged infrastructure, application transformation, and information optimization
    - LSI: intelligent solid-state storage solutions for breakthrough database acceleration
    - Mellanox: InfiniBand and Ethernet end-to-end server and storage interconnect solutions and services for data centers
    - Micro Focus: mainframe solutions, application modernization and development tools, software quality tools
    - NetApp: storage and data management
    - QLogic: high performance networking
    - Teleran: BI and data warehouse management solutions for Oracle Exadata Database Machine and Oracle Database

    Be sure to pick up your free Oracle Linux and Oracle VM DVD Kit if you visit one of these partners. And speaking of free, be sure to stop by for some cool treats, courtesy of sponsor QLogic:

    - Smoothie Bar on Monday, October 1 from 2:30 p.m. - 5:30 p.m.
    - Ice Cream Social on Wednesday, October 3 from 1:00 p.m. - 2:00 p.m.

    We look forward to seeing you at the pavilion.

    Read the article

  • Emacs: methods for debugging python

    - by Tom Willis
    I use Emacs for all my code-editing needs. Typically I will use M-x compile to run my test runner, which I would say gets me about 70% of what I need to keep the code on track. Lately, however, I've been wondering how it might be possible to use M-x pdb on occasions where it would be nice to hit a breakpoint and inspect things. In my googling I've found some things suggesting that this is useful/possible, but I have not managed to get it working in a way that I fully understand. I don't know if it's the combination of buildout + App Engine that might be making it more difficult, but when I try to do something like M-x pdb, I run pdb like this:

        /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/

    where .../bin/python is the interpreter buildout makes with the path set for all the eggs, and ~/bin/pdb is a simple script to call into pdb.main using the current Python interpreter:

        HellooKitty:hydrant twillis$ cat ~/bin/pdb
        #! /usr/bin/env python
        if __name__ == "__main__":
            import sys
            sys.version_info
            import pdb
            pdb.main()
        HellooKitty:hydrant twillis$

    .../bin/devappserver is the dev_appserver script that the buildout recipe makes for the GAE project, and .../parts/hydrant-app is the path to the app.yaml. I am first presented with a prompt:

        Current directory is /Users/twillis/bin/

    C-c C-f: nothing happens, but

        HellooKitty:hydrant twillis$ ps aux | grep pdb
        twillis 469 100.0 1.6 168488 67188 s002 Rs+ 1:03PM 0:52.19 /usr/local/bin/python2.5 /Users/twillis/projects/hydrant/bin/python /Users/twillis/bin/pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/
        twillis 477 0.0 0.0 2435120 420 s000 R+ 1:05PM 0:00.00 grep pdb
        HellooKitty:hydrant twillis$

    so something is happening. C-x [space] will report that a breakpoint has been set, but I can't manage to get things going. It feels like I am missing something obvious here. Am I? So, is interactive debugging in Emacs worthwhile? Is interactively debugging a Google App Engine app possible? Any suggestions on how I might get this working?
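
    A note for anyone trying the same thing: M-x pdb can usually be pointed at the stock module runner instead of a wrapper script, which keeps gud's source tracking intact. A minimal sketch of the invocation, reusing the poster's paths (the -m pdb form is the assumption here, not something from the post):

        M-x pdb
        Run pdb (like this): /Users/twillis/projects/hydrant/bin/python -m pdb /Users/twillis/projects/hydrant/bin/devappserver /Users/twillis/projects/hydrant/parts/hydrant-app/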

    Read the article

  • Large invoice database structure and rendering

    - by user132624
    Our client has an MS SQL database that has 1 million customer invoice records in it. Using the database, our client wants its customers to be able to log into a frontend web site and then view, modify and download their company’s invoices. Given the size of the database and the large number of customers who may log into the web site at any time, we are concerned about database engine performance and web-page invoice-rendering performance. The 1 million invoice records cover just 90 days’ sales, so we will remove invoices over 90 days old from the database. Most of the invoices have multiple line items. We can easily convert our invoices into various data formats; for example, it is easy for us to convert to and from SQL to XML with a related schema and XSLT. Any data conversion would be done on another server so as not to burden the web interface server. We have tentatively decided to run the web site on a .NET Framework IIS web server using MS SQL on MS Azure.

    How would you suggest we structure our database for best performance? For example, should we put all the invoices of all customers located within the same 5-digit or 6-digit zip codes into the same table? Or could we set up a separate home directory for each customer on IIS and place each customer’s invoices in each customer’s home directory in XML format?

    And secondly, what would you suggest as the best method to render customer invoices on a web page and allow customers to modify them, for best performance? The ADO.NET XML DataSet looks intriguing to us as a method, but we have never used it.
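
    On the structure question, a single invoice table indexed on the customer key is the usual starting point at this scale, rather than per-zip-code tables or per-customer directories; here is a minimal sketch, assuming the pyodbc driver, with illustrative connection details, table, and column names:

        # A minimal sketch of a customer-keyed invoice layout, assuming pyodbc
        # against SQL Server; all names and the connection string are hypothetical.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=invoices-db;"
            "DATABASE=billing;UID=app;PWD=secret")
        cur = conn.cursor()

        # One table for all customers; the index makes "all invoices for this
        # customer" a cheap seek even at millions of rows.
        cur.execute("""
            CREATE TABLE Invoice (
                InvoiceId   BIGINT PRIMARY KEY,
                CustomerId  INT NOT NULL,
                InvoiceDate DATE NOT NULL,
                Total       DECIMAL(12, 2) NOT NULL
            )""")
        cur.execute(
            "CREATE INDEX IX_Invoice_Customer ON Invoice (CustomerId, InvoiceDate)")

        rows = cur.execute(
            "SELECT InvoiceId, InvoiceDate, Total FROM Invoice WHERE CustomerId = ?",
            42).fetchall()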

    Read the article

  • [MINI HOW-TO] Remove a Network Computer from Windows Home Server

    - by Mysticgeek
    One of the cool features of Windows Home Server is the ability to back up and monitor the computers on your network. If you no longer need a machine to be monitored or backed up, here we show you how to remove it.

    Remove a Computer from WHS

    The process is straightforward and basic. Open the Windows Home Server Console and click on Computers & Backup. Right-click on the computer that you no longer need and click Remove. You’ll be prompted to verify that you want to remove the machine and delete all of its backup data. Check the box "I am sure I want to remove this computer", then click the Remove button. That’s all there is to it! The computer and all of its backup data are removed.

    Remember that if you remove a computer, all of its backup data will be deleted as well. If you no longer have the computer, you probably don’t need the backed-up data anyway, but you’ll want to be sure you no longer need it before removing.

    Read the article

  • Silverlight Cream for May 29, 2010 -- #872

    - by Dave Campbell
    In this Issue: Michael Washington, Chris Koenig, Kunal Chowdhury, SilverLaw, Shayne Burgess, Ian T. Lackey, Alan Beasley, Marlon Grech.

    Shoutouts: Ozymandias has a post up that's not Silverlight necessarily, but it's pretty cool: Typeface Selection Flowchart. Damian Schenkelman posted about the latest: Prism 2.2 Release available. Get it at Codeplex.

    From SilverlightCream.com:

    - Silverlight 4 OData Paging with RX Extensions: Michael Washington continues with this OData and Rx post using the View Model Style. Michael has some good external links, good info, and all the code.
    - WP7 Part 4: Morphing and Mapping: Chris Koenig has the 4th in his WP7 series, and this one is on MVVMLight and Bing Maps ... code included.
    - Silverlight 4: Interoperability with Excel using the COM Object: Kunal Chowdhury has a post up about Excel interoperability using the COM object, including opening an Excel workbook and writing data out, then modifying the data in the spreadsheet and seeing it updated in the app.
    - Creating A Flexible Surface Effect – Silverlight 4 (Part 1): SilverLaw put up a demo of an awesome 'water ripple' SL4 effect a couple days ago, and now he has part 1 of a great tutorial explaining it all.
    - Service Operations and the WCF Data Services Client: Shayne Burgess has a post up about Service Operations and how they can be used by the WCF Data Services client.
    - Role Based Silverlight Behaviors: Also from the Open Light Group, Ian T. Lackey has a post up about Behaviors that take a list of roles and update the UI appropriately.
    - How to Toggle (Show/Hide) using Behaviours (Behaviors) between Visual States or Storyboards in Expression Blend for Windows Phone: Alan Beasley has a quick post up talking about the solution he found to a problem he was having with state switching in a WP7 app.
    - MEFedMVVM: Testability: Marlon Grech has another MEFedMVVM post up, and he's discussing testability all rolled in there with everything else :)

    Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight Silverlight 3 Silverlight 4 Windows Phone MIX10

    Read the article

  • Lost in Translation

    - by antony.reynolds
    Using the Correct Character Set for the SOA Suite Database

    A couple of years ago I spent a wonderful week in Tel Aviv helping with the first Oracle BAM implementation in Israel. Although everyone I interacted with spoke better English than I did, the screens and data for the implementation were all in Hebrew, meaning the Hebrew alphabet. Over the week I learnt to recognize a few Hebrew words, enough to enable me to test what we were doing. So I knew SOA Suite worked OK with non-English and non-Latin character sets, and I was suspicious recently when a customer was having data corruption of non-Latin characters. On investigation it turned out that the data was received correctly by the SOA Suite, but then it was corrupted after being stored in the database.

    A little investigation revealed that the customer was using the default database character set, "WE8ISO8859P1", which, as the name suggests, only supports West European 8-bit characters. What was happening was that when the customer had installed his SOA repository he had ignored the message that his database was not using AL32UTF8 as the character set. After changing the character set on his database he no longer saw the corruption of non-English character data. So the moral of this story is:

    Always install the SOA Repository into an AL32UTF8 database.

    This is true for both SOA Suite 10g and 11g. Ignore it at your peril, because you never know when you will need to support Hebrew, or Japanese, or another multi-byte character set.
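
    A quick way to confirm what character set a database is actually using: the NLS query below is standard Oracle, shown here as a minimal sketch with the cx_Oracle driver; the connection details are illustrative:

        # A minimal check of the database character set, assuming the
        # cx_Oracle driver; connection details are hypothetical.
        import cx_Oracle

        conn = cx_Oracle.connect("system", "password", "dbhost:1521/orcl")
        cur = conn.cursor()
        cur.execute(
            "SELECT value FROM nls_database_parameters "
            "WHERE parameter = 'NLS_CHARACTERSET'")
        charset, = cur.fetchone()
        if charset != "AL32UTF8":
            print("Warning: SOA repository database is %s, not AL32UTF8" % charset)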

    Read the article

  • when to introduce an application services tier in an n-tier application

    - by user20358
    I am developing a web-based application whose primary objective is to fetch data from the database, display it on the UI, take in user inputs and write them back to the database. The application is not going to do any industrial-strength algorithm crunching, but it will receive a very high number of hits at peak times (described below), which will change through the day. The layers are your typical presentation, business, and data. The data layer is taken care of by the database server. The business layer will contain the DAL component to access the database server over TCP. The choices I have for separating these layers into tiers are:

    1. Keep the presentation and business layers on the same tier.
    2. Put the presentation layer on a tier by itself and the business layer on a tier by itself.

    In the case of choice 2, the business layer will be accessed by the presentation layer using a WCF service, either over HTTP or TCP. I don't see any heavy processing being done in the business layer, so I am leaning towards option 1. For the same reason, I feel adding a new tier will only introduce network latency. However, in terms of scalability, in case I need to scale up or scale out, which is the better way to go? This application will need to support up to 6 million users an hour. There will be a reasonable amount of data in each user session, storing the user's preferences and other details. I will be using page-level caching as well.

    Read the article

  • Implementation details of database synchronisation API

    - by Daniel
    I want to achieve database synchronisation between my server database and a client application. The server runs MySQL, and the applications may run different database technologies; their implementation isn't important. I have a MySQL database online, web-accessible via an API I wrote in PHP (just a detail). My client application ships with a copy of the online data. As time passes, my goal is to check for any changes in the online database and make these updates available to the client app via an API call: by sending a date to an API endpoint corresponding to the last date the app was updated, the response would be JSON filled with all new objects, updated objects, and deleted IDs, making it possible to update the local store appropriately. Essentially I want to do this: http://dbconvert.com/synchronization.php

    My question is about the implementation details. Would I need to add a column to my database tables with a "last modified" date? Since the client app could be very out of date if it's been offline for a long time, does that also mean I shouldn't delete data from the online database, but instead have another column called "deleted" set to 1, with the modified date updated appropriately? Would my SQL query simply check for all data with a modified date later than the date passed into the API request by the client? I feel like there's a lot more to it than having a ton of dates everywhere. I also worry that I will need to persist a lot of old data in order to ensure that old versions of the client app always have the opportunity to delete parts of their data when they sync.
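
    The last-modified column plus a tombstone flag is indeed the common pattern; here is a minimal sketch of the server-side delta query, assuming the MySQLdb driver, with illustrative table and column names:

        # A minimal sketch of the delta query behind a sync endpoint, assuming
        # the MySQLdb driver; table, column, and connection details are hypothetical.
        import json
        import MySQLdb

        def changes_since(last_sync):
            """Return new/updated rows and tombstoned ids modified after last_sync."""
            db = MySQLdb.connect(host="localhost", user="app",
                                 passwd="secret", db="shop")
            cur = db.cursor()
            # Rows are never physically deleted; `deleted` is a tombstone flag,
            # and `modified_at` is touched on every insert, update, or soft delete.
            cur.execute(
                "SELECT id, name, price, deleted FROM product "
                "WHERE modified_at > %s", (last_sync,))
            payload = {"upserts": [], "deleted_ids": []}
            for row_id, name, price, deleted in cur.fetchall():
                if deleted:
                    payload["deleted_ids"].append(row_id)
                else:
                    payload["upserts"].append(
                        {"id": row_id, "name": name, "price": str(price)})
            return json.dumps(payload)

    Tombstone rows only need to live as long as the oldest client version you intend to support; past that horizon, a client can be told to do a full re-sync instead.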

    Read the article

  • Oracle Linux Pavilion is Back for Oracle OpenWorld

    - by Oracle OpenWorld Blog Team
    By Zeynep Koch

    Back by popular demand, Oracle will again host the Oracle Linux Pavilion at Oracle OpenWorld from October 1-3. The pavilion will be located in the Exhibition Hall at Moscone South, Booth 1033, next to the Oracle DEMOgrounds and Oracle Linux demopods. At the pavilion a select group of ISVs, IHVs, and SIs will showcase their products that have been Oracle Linux- and/or Oracle VM-certified. These certified products enable customer applications to run faster, thereby saving money. Partners exhibiting their solutions in the Oracle Linux Pavilion include:

    - BeyondTrust: context-aware security intelligence for dynamic IT infrastructures such as cloud, mobile, and virtual technologies
    - Centrify: control, secure, and audit access to cross-platform systems, mobile devices, and applications
    - Data Intensity: cloud services and application management
    - Fujitsu: technology platforms, private cloud, services, ubiquitous and device solutions
    - HP: converged cloud, converged infrastructure, application transformation, and information optimization
    - LSI: intelligent solid-state storage solutions for breakthrough database acceleration
    - Mellanox: InfiniBand and Ethernet end-to-end server and storage interconnect solutions and services for data centers
    - Micro Focus: mainframe solutions, application modernization and development tools, software quality tools
    - NetApp: storage and data management
    - QLogic: high performance networking
    - Teleran: BI and data warehouse management solutions for Oracle Exadata Database Machine and Oracle Database

    Be sure to pick up your free Oracle Linux and Oracle VM DVD Kit if you visit one of these partners. We look forward to seeing you at the pavilion.

    Read the article

  • What's the difference between General Ledger Transfer Program, Create Accounting and Submit Accounting?

    - by Oracle_EBS
    In Release 12, the General Ledger Transfer Program is no longer used. Use Create Accounting or Submit Accounting instead. Submit Accounting spawns the Revenue Recognition process; the Create Accounting program does not. So if you create transactions with rules, you would want to run the Submit Accounting process to spawn Revenue Recognition and create the distribution rows, which Create Accounting then processes to the GL.

                                                   Create Accounting   Submit Accounting
    Short name for concurrent program              XLAACCPB            ARACCPB
    Specific to Receivables                        No                  Yes
    Runs Revenue Recognition automatically         No                  Yes
    Can be run real-time for one
      transaction/receipt at a time                Yes                 No

    Programs spawned by Create Accounting:
    1) XLAACCPB module: Create Accounting
    2) XLAACCUP module: Accounting Program
    3) GLLEZL module: Journal Import

    Programs spawned by Submit Accounting:
    1) ARTERRPM module: Revenue Recognition Master Program
    2) ARTERRPW module: Revenue Recognition with parallel workers - could be numerous
    3) ARREVSWP module: Revenue Contingency Analyzer
    4) XLAACCPB module: Create Accounting
    5) XLAACCUP module: Accounting Program
    6) GLLEZL module: Journal Import

    Keep in mind, reports owned by the application 'Subledger Accounting' cannot be seen when running the report from a Receivables responsibility. You may want to request your sysadmin to attach the following SLA reports/programs to your AR responsibility, as you will need these for your AR closing process:

    - XLAPEXRPT: Subledger Period Close Exception Report - shows transactions in status final, incomplete and unprocessed.
    - XLAGLTRN: Transfer Journal Entries to GL - transfers transactions in final status and manually created transactions to GL.

    To add reports/programs owned by the application 'Subledger Accounting' (such as the two above), add them to the request group as follows. Let's use the Subledger Accounting report XLATBRPT: Open Account Balances Listing Report as an example.

    Responsibility: System Administrator
    Navigation: Security > Responsibility > Define
    Query the name of your Receivables responsibility and note the Request Group (i.e. Receivables All).
    Navigation: Security > Responsibility > Request
    Query the Request Group. Go to the Request zone and click on Add Record. Enter the following:
        Type: Program
        Name: Open Account Balances Listing
    Save.
    Responsibility: Receivables Manager
    Navigation: Control > Requests > Run
    In the list of values you should now see the 'Open Account Balances Listing' report.

    References:
    Note 748999.1 - How to add reports for application Subledger Accounting to a Receivables responsibility
    Note 759534.1 - R12 ARGLTP General Ledger Transfer Program Errors Out
    Note 1121944.1 - Understanding and Troubleshooting Revenue Recognition in Oracle Receivables

    Read the article

  • Could Not Load Type Microsoft.Build.Framework.BuildEventContext

    Setting up a TeamCity build, I got this error:

        C:\Program Files\MSBuild\Microsoft\VisualStudio\v9.0\TeamData\Microsoft.Data.Schema.SqlTasks.targets(80, 5): error MSB4018: The "SqlSetupDeployTask" task failed unexpectedly.
        System.TypeLoadException: Could not load type 'Microsoft.Build.Framework.BuildEventContext' from assembly 'Microsoft.Build.Framework, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a'.
           at Microsoft.Build.BuildEngine.TaskExecutionModule.SetBatchRequestSize()
           at Microsoft.Build.BuildEngine.TaskExecutionModule..ctor(EngineCallback engineCallback, TaskExecutionModuleMode moduleMode, Boolean profileExecution)
           at Microsoft.Build.BuildEngine.NodeManager..ctor(Int32 cpuCount, Boolean childMode, Engine parentEngine)
           at Microsoft.Build.BuildEngine.Engine..ctor(Int32 numberOfCpus, Boolean isChildNode, Int32 parentNodeId, String localNodeProviderParameters, BuildPropertyGroup globalProperties, ToolsetDefinitionLocations locations)
           at Microsoft.Build.BuildEngine.Engine.get_GlobalEngine()
           at Microsoft.Data.Schema.Build.DeploymentProjectBuilder.CreateDeploymentProject()
           at Microsoft.Data.Schema.Tasks.DBSetupDeployTask.BuildDeploymentProject(ErrorManager errors, ExtensionManager em)
           at Microsoft.Data.Schema.Tasks.DBSetupDeployTask.Execute()
           at Microsoft.Build.BuildEngine.TaskEngine.ExecuteTask(ExecutionMode howToExecuteTask, Hashtable projectItemsAvailableToTask, BuildPropertyGroup projectPropertiesAvailableToTask, Boolean& taskClassWasFound)

    The usual searching didn't bring back anything useful, but I figured out that I'd missed a dropdown list in the TeamCity project setup: originally I was using Microsoft .NET Framework 2.0 for my MSBuild task. Changing it to 3.5 got me past this error (and on to the next one).

    Read the article

  • Tkinter on Ubuntu 14.04 seems not to work

    - by empedokles
    I receive the following traceback:

        Traceback (most recent call last):
          File "tkinter_basic_frame.py", line 4, in <module>
            from Tkinter import Tk, Frame, BOTH
          File "/usr/lib/python2.7/lib-tk/Tkinter.py", line 42, in <module>
            raise ImportError, str(msg) + ', please install the python-tk package'
        ImportError: No module named _tkinter, please install the python-tk package

    This is the demo script I'm trying to run:

        #!/usr/bin/python
        # -*- coding: utf-8 -*-

        from Tkinter import Tk, Frame, BOTH

        class Example(Frame):
            def __init__(self, parent):
                Frame.__init__(self, parent, background="white")
                self.parent = parent
                self.initUI()

            def initUI(self):
                self.parent.title("Simple")
                self.pack(fill=BOTH, expand=1)

        def main():
            root = Tk()
            root.geometry("250x150+300+300")
            app = Example(root)
            root.mainloop()

        if __name__ == '__main__':
            main()

    From my knowledge, Tkinter should be included in Python 2.7. Why do I receive the traceback? Doesn't Ubuntu contain the standard Python distribution?

    This is solved: I had to install it manually in Synaptic (got the hint in the meantime from another forum). Wikipedia says: "Tkinter is a Python binding to the Tk GUI toolkit. It is the standard Python interface to the Tk GUI toolkit and is Python's de facto standard GUI, and is included with the standard Windows and Mac OS X install of Python." Not good that it isn't included in Ubuntu as well. (Tkinter on Wikipedia)
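
    For completeness, the package named in the error message can also be installed from the command line rather than from Synaptic; on a stock Ubuntu 14.04 system with Python 2, the usual fix is:

        sudo apt-get install python-tk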

    Read the article

  • links for 2011-03-02

    - by Bob Rhubart
    - Oracle Technology Network Architect Day: Denver. Registration is now open. Sessions will cover IT optimization and consolidation, cloud computing, the evolving role of enterprise IT, and more. (tags: oracle otn entarch event denver)
    - SOA Suite Integration: Part 2: A basic BPEL process (The Shorten Spot). The latest post in Anthony Shorten's series about SOA Suite integration with Oracle Utilities Application Framework. (tags: oracle otn soa bpel soasuite)
    - ADF: How to create web service based ADF pages. The first in a promised series of three posts on the topic by Marianne Horsch. (tags: oracle soa webservices adf)
    - David Butler: MDM Poised for Growth (Oracle Master Data Management). David says: "Businesses are talking about the need to fix master data before they can successfully move forward on SOA initiatives. And the growing demands for compliance continue to be a major driver." (tags: oracle otn mdm)
    - Cloud governance is about more than security | The Pervasive Data Center - CNET News. Legal and regulatory procedures, transparency, service levels, indemnification, and more are all part of a broader governance landscape that requires IT to work closely with business users. Read this blog post by Gordon Haff on The Pervasive Data Center. (tags: ping.fm)
    - Senthilkumar Rajendran's Blog: Horizontal Scaling OBIEE 11g (tags: ping.fm)
    - InfoQ: Searching Without Objectives. Kenneth O. Stanley considers that innovation is stifled when we strictly follow a high goal, and that we progress more when we are inclined toward discovery rather than following an objective. (tags: ping.fm)
    - InfoQ: Brownfield Software - Industrial Waste or Business Fertilizer? Josh Graham addresses 10 myths related to working on legacy software, attempting to prove that one can make good use of legacy code without having to rewrite the entire thing. (tags: ping.fm)

    Read the article

  • ANNOUNCEMENT: Oracle VM 3 Templates Available for Oracle Secure Global Desktop 4.62

    - by Mohan Prabhala
    Today, we are proud to announce the general availability of Oracle VM 3 templates for Oracle Secure Global Desktop version 4.62. With Oracle VM 3 templates, anyone using Oracle VM 3 need not download, install and configure the operating system and product(s) individually. In this case, the supported operating system (Oracle Linux 5.7) and the Oracle Secure Global Desktop 4.62 product are packaged together into a template that one can easily import and clone as a VM into Oracle VM 3. This results in a nearly instant deployment and configuration of Oracle Secure Global Desktop within Oracle VM 3, drastically reducing the evaluation and deployment time for Oracle Secure Global Desktop when leveraging Oracle VM 3.

    Feel free to give it a try! Log in to the Oracle VM section at the Oracle Software Delivery Cloud (click on 'Cloud Portal (Main)' at the top right) and:

    - Under Oracle VM templates - x86 64-bit, look for Oracle VM 3 Template (OVF) for Oracle Secure Global Desktop Media Pack for x86_64 (64 bit): Oracle Secure Global Desktop 4.62 template for x86_64 (64 bit) with Oracle Linux 5.7
    - Under Oracle VM templates - x86 32-bit, look for Oracle VM 3 Template (OVF) for Oracle Secure Global Desktop Media Pack for x86 (32 bit): Oracle Secure Global Desktop 4.62 template for x86 (32 bit) with Oracle Linux 5.7

    Download either of the above templates. Once you are done, you must:

    1. First, import the assembly (.ova) file that you downloaded from the Oracle Software Delivery Cloud.
    2. Next, create a virtual machine template from the assembly.
    3. Finally, create a virtual machine from the template.

    Once the virtual machine is created and starts up, be sure to configure the networking parameters (hostname, IP address, netmask, gateway, etc.) and optional user parameters correctly. You must also enter a root password during first boot. And that's it - the Oracle Secure Global Desktop install script will pick up the networking parameters, prompt for confirmation and complete a default installation. Once the installation is complete, you may want to refer to the Oracle Secure Global Desktop Administration Guide to learn more about Oracle Secure Global Desktop and its capabilities.

    Read the article

  • Apache config file. Redirect permanent gives 403 error

    - by Homunculus Reticulli
    I am changing my domain from foo.com to foobar.org. I used a Redirect permanent in my Apache config file and then restarted Apache. When I try to access the old domain foo.com, I get a 403 error. This is what my Apache config file looks like:

        <VirtualHost *:80>
            ServerName foo.com
            #ServerAlias www.foo.com
            #ServerAdmin [email protected]
            Redirect permanent / http://www.foobar.org/

            DocumentRoot /path/to/project/foo/web
            DirectoryIndex index.php

            # CustomLog with format nickname
            LogFormat "%h %l %u %t \"%r\" %>s %b" common
            CustomLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.access.log" common
            LogLevel notice
            ErrorLog "|/usr/bin/cronolog /var/log/apache2/%Y%m.foo.errors.log"

            <Directory />
                Order Deny,Allow
                Deny from all
            </Directory>

            <Files ~ "^\.ht">
                Order allow,deny
                Deny from all
            </Files>

            <Directory /path/to/project/foo/web>
                Options -Indexes -Includes
                AllowOverride All
                Allow from All

                RewriteEngine On
                # We check if the .html version is here (caching)
                RewriteRule ^$ index.html [QSA]
                RewriteRule ^([^.])$ $1.html [QSA]
                RewriteCond %{REQUEST_FILENAME} !-f
                # No, so we redirect to our front end controller
                RewriteRule ^(.*)$ index.php [QSA,L]
            </Directory>

            <Directory /path/to/project/foo/web/uploads>
                Options -ExecCGI -FollowSymLinks -Indexes -Includes
                AllowOverride None
                php_flag engine off
            </Directory>

            Alias /sf /lib/vendor/symfony/symfony-1.3.8/data/web/sf
            <Directory /lib/vendor/symfony/symfony-1.3.8/data/web/sf>
            # Alias /sf /lib/vendor/symfony/symfony-1.4.19/data/web/sf
            # <Directory /lib/vendor/symfony/symfony-1.4.19/data/web/sf>
                Options -Indexes -Includes
                AllowOverride All
                Allow from All
            </Directory>
        </VirtualHost>

    Can anyone spot what I may be doing wrong? The site foobar.org does exist, so I don't know why this error occurs - help?

    Read the article

  • Case Study: Polystar Improves Telecom Networks Performance with Embedded MySQL

    - by Bertrand Matthelié
    Polystar delivers and supports systems that increase the quality, revenue and customer satisfaction of telecommunication services. Headquartered in Sweden, Polystar helps operators worldwide, including Telia, Tele2, Telekom Malaysia and T-Mobile, to monitor their network performance and improve service levels.

    Challenges:

    - Deliver complete turnkey solutions to customers, integrating a database that ensures high performance at scale while being very easy to use, manage and optimize.
    - Enable the implementation of distributed architectures, including one database per server, while maintaining a low Total Cost of Ownership (TCO).
    - Avoid growing database complexity as the volume of mobile data to monitor and analyze drastically increases.

    Solution:

    - Evaluation of several databases and selection of MySQL based on its high performance, manageability, and low TCO.
    - The MySQL databases implemented within the Polystar solutions handle on average 3,000 to 5,000 transactions per second.
    - Up to 50 million records are inserted every day in each database.
    - Typical installations include between 50 and 100 MySQL databases, up to 300 for the largest ones.
    - Data is then periodically aggregated, with the original records being overwritten, as detailed information becomes unnecessary to operators after a few weeks.

    The exponential growth in mobile data traffic, driven by the proliferation of smartphones and the usage of social media, requires ever more powerful solutions to monitor, analyze and turn network data into actionable business intelligence. With MySQL, Polystar can deliver powerful, yet easy to manage, solutions to its customers. MySQL-based Polystar solutions enable operators to monitor, manage and improve the service levels of their telecom networks in over a dozen countries from a single location. The new and innovative MySQL features constantly delivered by Oracle help ensure Polystar that it will be able to meet its customers' needs as they evolve.

    "MySQL has been a great embedded database choice for us. It delivers the high performance we need while remaining very easy to use, manage and tune. Power and simplicity at its best." - Mats Söderlindh, COO at Polystar.

    Read the article

  • HTML5-MVC application using VS2010 SP1

    - by nmarun
    This is my first attempt at creating HTML5 pages. VS 2010 allows working with HTML5 now (you just need to make a small change after installing SP1), so my Razor view is now an HTML5 page. I call this application 5Commerce – (an over-simplified) HTML5 e-commerce site. Here's the flow of the application:

    - the home page renders
    - the user enters first and last name, chooses a product and the quantity
    - the user can enter additional instructions for the order
    - the user places the order
    - the user is then taken to another page showing the order details

    Off to the details. Here is what the page contains in Google Chrome 10 beta (or later) soon after it renders, and some of the things to observe on it. Look a little closer and you'll see a border around the first-name textbox – this is 'autofocus' in action. I've set the autofocus attribute on this textbox, so as soon as the page loads, this control gets focus.

        <input type="text" autofocus id="firstName" class="inputWidth" data_minlength=""
               data_maxlength="" placeholder="first name" />

    See the partially grayed-out 'last name' text in the second textbox? This is set using a placeholder attribute (see above). It gets wiped out on focus and improves the UI visuals in general. The quantity textbox is a numerical-only textbox.

        <input type="number" id="quantity" data_mincount="" class="inputWidth" />

    The last line is for additional instructions. This looks like a label, but its content is editable; just adding the 'contenteditable' attribute to the span allows the user to edit the text inside.

        <span contenteditable id="additionalInstructions" data_texttype="" class="editableContent">select text and edit </span>

    All of the above is just plain HTML (no lurking JavaScript acting in here). Makes it real clean and simple. Going more into the HTML, I see that the _Layout.cshtml is already using some HTML5 content. I created my project before installing SP1, so that was the reason for my surprise.

        <!DOCTYPE html>

    This is the doctype declaration in HTML5, and it is supported even by IE6 (just take my word on IE6 now, don't go install it to test it, especially when MS is doing an IE6 countdown). That's just amazing, and extremely easy to read, remember and talk about. A few less bytes on every call! I modified the rest of my _Layout.cshtml to the below:

        <!DOCTYPE html>
        <html>
        <head>
            <title>5Commerce - HTML 5 Ecommerce site</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
            <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/CustomScripts.js")" type="text/javascript"></script>
            <script type="text/javascript">
                $(document).ready(function () {
                    WireupEvents();
                });
            </script>
        </head>

        <body role="document" class="bodybackground">
            <header role="heading">
                <h2>5Commerce - HTML 5 Ecommerce site!</h2>
            </header>
            <section id="mainForm">
                @RenderBody()
            </section>
            <footer id="page_footer" role="siteBaseInfo">
                <p>&copy; 2011 5Commerce Inc!</p>
            </footer>
        </body>
        </html>

    I'm sure you're seeing some of the new tags here. To give a brief intro about them:

    - <header>, <footer>: marks the header/footer region of a page or section.
    - <section>: a logical grouping of content.
    - role attribute: identifies the responsibility of an element. This attribute can be used by screen readers and can also be filtered through jQuery.

    SP1 also allows for some IntelliSense in HTML5.
    You see the other types of input fields – email, date, datetime, month, url – and there are others as well. Once my page loads, i.e. 'on document ready', I'm wiring up the events following the principles of unobtrusive JavaScript. In the snippet below, I'm controlling the behavior of the input controls for specific events.

        $("#productList").bind('change blur', function () {
            IsSelectedProductValid();
        });

        $("#quantity").bind('blur', function () {
            IsQuantityValid();
        });

        $("#placeOrderButton").click(
            function () {
                if (IsPageValid()) {
                    LoadProducts();
                }
            });

    This enables some client-side validation to occur before the data is sent to the server. These validation constraints are obtained through a JSON call to the WCF service and are set on the 'data_' attributes of the input controls. Have a look at the GetValidators() function below:

        function GetValidators() {
            // the GET to your webservice or page
            $.ajax({
                type: "GET", // GET or POST or PUT or DELETE verb
                url: "http://localhost:14805/OrderService.svc/GetValidators", // location of the service
                data: "{}", // data sent to server
                contentType: "application/json; charset=utf-8", // content type sent to server
                dataType: "json", // expected data format from server
                processData: true,
                success: function (result) { // on successful service call
                    if (result.length > 0) {
                        for (i = 0; i < result.length; i++) {
                            if (result[i].PropertyName == "FirstName") {
                                if (result[i].MinLength > 0) {
                                    $("#firstName").attr("data_minLength", result[i].MinLength);
                                }
                                if (result[i].MaxLength > 0) {
                                    $("#firstName").attr("data_maxLength", result[i].MaxLength);
                                }
                            }
                            else if (result[i].PropertyName == "LastName") {
                                if (result[i].MinLength > 0) {
                                    $("#lastName").attr("data_minLength", result[i].MinLength);
                                }
                                if (result[i].MaxLength > 0) {
                                    $("#lastName").attr("data_maxLength", result[i].MaxLength);
                                }
                            }
                            else if (result[i].PropertyName == "Quantity") {
                                if (result[i].MinCount > 0) {
                                    $("#quantity").attr("data_minCount", result[i].MinCount);
                                }
                            }
                            else if (result[i].PropertyName == "AdditionalInstructions") {
                                if (result[i].TextType.length > 0) {
                                    $("#additionalInstructions").attr("data_textType", result[i].TextType);
                                }
                            }
                        }
                    }
                },
                error: function (result) { // when the service call fails
                    alert('Service call failed: ' + result.status + ' ' + result.statusText);
                }
            });

            //....
        }
    Just before the GetValidators() function runs and sets the validation constraints, you can inspect the HTML through the dev tools of Chrome; after the function executes, you see the values in the 'data_' attributes. As and when we enter valid data into these fields, the error messages disappear, since the validation is bound to the blur event of the control. There you see… no error messages (well, the catch here is that once you enter THAT name, all errors disappear automatically). Clicking on 'Place Order!' runs the SaveOrder function. You can see the JSON for the order object that is getting constructed and passed to the WCF service.

        function SaveOrder() {
            var addlInstructionsDefaultText = "select text and edit";
            var addlInstructions = $("span:first").text();
            if (addlInstructions == addlInstructionsDefaultText) {
                addlInstructions = '';
            }
            var orderJson = {
                AdditionalInstructions: addlInstructions,
                Customer: {
                    FirstName: $("#firstName").val(),
                    LastName: $("#lastName").val()
                },
                OrderedProduct: {
                    Id: $("#productList").val(),
                    Quantity: $("#quantity").val()
                }
            };

            // the POST to your webservice or page
            $.ajax({
                type: "POST", // GET or POST or PUT or DELETE verb
                url: "http://localhost:14805/OrderService.svc/SaveOrder", // location of the service
                data: JSON.stringify(orderJson), // data sent to server
                contentType: "application/json; charset=utf-8", // content type sent to server
                dataType: "json", // expected data format from server
                processData: false,
                success: function (result) { // on successful service call
                    window.location.href = "http://localhost:14805/home/ShowOrderDetail/" + result;
                },
                error: function (request, error) { // when the service call fails
                    alert('Service call failed: ' + request.status + ' ' + request.statusText);
                }
            });
        }

    The service saves this order into an XML file and returns the order id (a GUID). On success, I redirect to the ShowOrderDetail action method, passing the GUID. This page will show all the details of the order. Although the back-end weightlifting is done by WCF, I did not show any of that plumbing work, as I wanted to concentrate more on HTML5 and its associates. However, you can see it all in the source here.

    I do have one issue with HTML5, and this is an existing issue with HTML4 as well. If you look at the snippet above where I've declared a textbox for first name, you'll see the autofocus attribute just dangling by itself. It doesn't follow the XML syntax of 'key="value"', allowing users to continue writing badly formatted HTML even in the new version. You'll see the same issue with the 'contenteditable' attribute as well. The work-around is that you can write 'autofocus="true"' and it'll work fine, plus it makes the markup well-formed. But unless the standards enforce this, there will be people (me included) who'll get by, by just typing the bare minimum! Hoping this will get fixed in the coming version updates.

    Source code here. Verdict: I think it's time for us to embrace the new HTML5. Thank you HTML4, and welcome HTML5.

    Read the article

  • How to create a PPA for C++ program?

    - by piotr
    My questions are: for a C++/gtkmm project created with NetBeans, how do I make a package for a PPA from it? I have created the target file structure (*.desktop, icon file, UI glade files). The binary goes to /opt/extras.ubuntu.com/myagenda/bin/myagenda. There is also a folder of glade files that must go to /opt/extras.ubuntu.com/myagenda/bin/myagenda/ui. The desktop file goes to /usr/share/applications/myagenda.desktop. The icon goes to /usr/share/icons/hicolor/scalable/apps/myagenda.svg.

    As you see, there is a really small number of files. Now, how do I manage all this stuff to create a package on a PPA which knows where and how to put these files in their targets?

        opt
        └── extras.ubuntu.com
            └── myagenda
                ├── bin
                │   └── myagenda
                └── ui
                    ├── item_btn_delete.png
                    ├── item_btn_edit.png
                    ├── myagenda.png
                    ├── myagenda.svg
                    ├── reminder.png
                    └── ui.glade
        usr
        └── share
            ├── applications
            │   └── myagenda.desktop
            └── icons
                └── hicolor
                    └── scalable
                        └── apps
                            └── myagenda.svg

    Update: I created an install file in the debian directory with these targets:

        data/myagenda          /opt/extras.ubuntu.com/myagenda/bin
        data/ui/*              /opt/extras.ubuntu.com/myagenda/ui
        data/myagenda.desktop  /usr/share/applications
        data/myagenda.svg      /usr/share/icons/hicolor/scalable/apps

    After dpkg-buildpackage it builds, but for the amd64 architecture. Now I am trying to change that to i386.

    Read the article
