Search Results

Search found 64995 results on 2600 pages for 'data import'.


  • Unicode license

    - by Eric Grange
    The Unicode Terms of Use (http://www.unicode.org/copyright.html) state that any software that uses the Unicode Data Files (or a modification of them) should carry the Unicode license references. It seems to me that most Unicode libraries have functions to check whether a character is a digit, a letter, a symbol, etc., and so will contain a modification of the Unicode Data Files (usually in the form of tables). Does that mean the license applies, and that all applications that use such Unicode libraries should carry the license? I've checked around, and very little Unicode software does carry the license, though arguably most of the software that didn't was from companies that are members of the Unicode Consortium (do they get a license exemption?). Some (for instance Mozilla) are only "Liaison Members", and while their software does not carry the license (AFAICT), it obviously relies on data derived from those data files. Is Mozilla in breach of the license? Should we carry the license in all apps that include any form of advanced Unicode support (i.e. that are bound to rely on the Unicode Data Files)? Or is there some form of broad exemption, given that very little software out there carries the license?
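
    To make the dependency concrete, here is a minimal Java sketch (my own illustration, not taken from any particular library): the standard Character class answers exactly these digit/letter/category questions, and its tables are derived from the Unicode Character Database.

    Code:
    public class UnicodeCheck {
        public static void main(String[] args) {
            char c = 'Ӂ'; // U+04C1, CYRILLIC CAPITAL LETTER ZHE WITH BREVE
            System.out.println(Character.isLetter(c));  // true
            System.out.println(Character.isDigit(c));   // false
            // Category lookups like this are backed by Unicode-derived tables.
            System.out.println(Character.getType(c) == Character.UPPERCASE_LETTER); // true
        }
    }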

    Read the article

  • Stairway to T-SQL DML Level 5: The Mathematics of SQL: Part 2

    Joining tables is crucial to understanding data relationships in a relational database. When you are working with your SQL Server data, you will often need to join tables to produce the results your application requires. A good understanding of set theory, of the mathematical operators available, and of how they are used to join tables will make it easier for you to retrieve the data you need from SQL Server.
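
    As a minimal illustration of the idea (my own sketch, not from the Stairway series; the connection string and the Customers/Orders table names are hypothetical), an INNER JOIN returns the set intersection of two tables on their matching keys:

    Code:
    import java.sql.*;

    public class JoinExample {
        public static void main(String[] args) throws SQLException {
            // Hypothetical connection string; adjust to your server.
            String url = "jdbc:sqlserver://localhost;databaseName=Sales;integratedSecurity=true";
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT c.Name, o.OrderDate " +
                     "FROM Customers c " +
                     "INNER JOIN Orders o ON o.CustomerId = c.CustomerId")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "  " + rs.getDate(2));
                }
            }
        }
    }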

    Read the article

  • How to process large files in NetLogo? [closed]

    - by user65597
    I am running into problems in NetLogo with large *.csv / *.txt files. The files can contain about 1 million records, and I need to read them (to eventually create a diagram based on the data). With the most straightforward code, my program needs about 2 minutes to process these files. How should I approach reading such large data files faster in NetLogo? Is NetLogo even suitable for such tasks, given that it seems to be designed more for teaching and learning?
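
    The usual remedy, whatever the language, is to stream the file line by line instead of loading it whole, and to parse only the fields the model actually needs. In NetLogo itself that means the file-open / file-read-line primitives (or a csv extension, if available); the sketch below shows the same pattern in Java, purely as an illustration:

    Code:
    import java.io.*;

    public class StreamRows {
        public static void main(String[] args) throws IOException {
            int rows = 0;
            try (BufferedReader in = new BufferedReader(new FileReader("data.csv"))) {
                String line;
                while ((line = in.readLine()) != null) {
                    // Split lazily and keep only the fields the diagram needs.
                    String[] fields = line.split(",");
                    rows++;
                }
            }
            System.out.println(rows + " rows processed");
        }
    }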

    Read the article

  • Microsoft Sql Server 2008 R2 System Databases

    For a majority of software developers, little time is spent understanding the inner workings of the database management systems (DBMS) they use to store data for their applications. I personally place myself in this group. In my case, I have used various versions of Microsoft's SQL Server (2000, 2005, and 2008 R2) and only recently learned how valuable the system databases really are, while preparing to deliver a lecture on "SQL Server 2008 R2 System Databases".

    So what are system databases in MS SQL Server, and why should I know them? Microsoft uses system databases to support the SQL Server DBMS, much like a developer uses config files or database tables to support an application. Each of these system databases provides specific functionality that allows MS SQL Server to work:

    Name          Database File              Log File
    Master        master.mdf                 mastlog.ldf
    Resource      mssqlsystemresource.mdf    mssqlsystemresource.ldf
    Model         model.mdf                  modellog.ldf
    MSDB          msdbdata.mdf               msdblog.ldf
    Distribution  distmdl.mdf                distmdl.ldf
    TempDB        tempdb.mdf                 templog.ldf

    Master Database: If you have used MS SQL Server then you should recognize the Master database, especially if you have used SQL Server Management Studio (SSMS) to connect to a user-created database. MS SQL Server requires the Master database in order for the DBMS to start, because of the information it stores. Examples of data stored in the Master database: user logins, linked servers, configuration information, and information on user databases.

    Resource Database: Honestly, I never knew this database even existed until I started to research SQL Server system databases, largely because the Resource database is hidden from users. In fact, its database files are stored within the Binn folder instead of the standard MS SQL Server database folder path. This database contains all the system objects that can be accessed by all other databases; in short, it holds the system views and stored procedures that appear in every user database for reporting system information. One of the benefits of storing system views and stored procedures in a single hidden database is that it simplifies upgrading a SQL Server instance; maintenance is also reduced, since only one code base has to be maintained for all of the system views and procedures.

    Model Database: The Model database, as the name implies, is the model for all new databases created by users. It allows default database objects to be predefined for all new databases within a MS SQL Server instance. For example, if every database created by a user needs an "Audit" table, then defining the "Audit" table in the Model database guarantees that the table will be present in every new database created after the model is altered.

    MSDB Database: The MSDB database is used by SQL Server Agent, SQL Server Database Mail, and SQL Server Service Broker, along with SQL Server itself. SQL Server Agent uses this database to store job configurations and job schedules, along with SQL alerts and operators. This database also stores all SQL job parameters and each job's execution history, and it holds database backup and maintenance plans as well as details pertaining to SQL log shipping, if it is being used.

    Distribution Database: The Distribution database is only used during replication and stores metadata and history information pertaining to replication. Furthermore, when transactional replication is used, this database also stores information regarding each transaction. Note that replication is not turned on by default in MS SQL Server and that the Distribution database is hidden from SSMS.

    TempDB Database: TempDB, as the name implies, is used to store temporary data and data objects, such as temp tables and temp stored procedures. Note that all data and data objects are cleared from this database when SQL Server restarts. SQL Server also uses this database for some internal operations, typically large sort and index operations, and for storing row versions if row versioning or snapshot-isolation transactions are being used.

    I would love to hear from others about their experiences using system databases, tables, and objects in real-world environments.
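
    For the curious, a quick way to see these databases on a live instance is to query the catalog view sys.databases in master; here is a hedged Java/JDBC sketch (the connection string is hypothetical):

    Code:
    import java.sql.*;

    public class ListDatabases {
        public static void main(String[] args) throws SQLException {
            String url = "jdbc:sqlserver://localhost;integratedSecurity=true"; // adjust
            try (Connection con = DriverManager.getConnection(url);
                 Statement st = con.createStatement();
                 ResultSet rs = st.executeQuery(
                     "SELECT name, database_id FROM sys.databases ORDER BY database_id")) {
                while (rs.next()) {
                    // database_id 1-4 are master, tempdb, model, and msdb;
                    // the hidden Resource database is not listed here.
                    System.out.println(rs.getInt(2) + "  " + rs.getString(1));
                }
            }
        }
    }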

    Read the article

  • How do I find information on who links to my sites?

    - by bobdobbs
    I'm trying to figure out if there's a free way to get information on backlinks to my site. I've had Webmaster Tools and Google Analytics set up for years, but I can't find access to data about site backlinks in either toolset. Webmaster Tools, under 'Traffic' - 'Links to your site', gives me the same message for all of my sites: "No data available". I haven't been able to find anything in GA that gives any information on backlinks. I've heard of using "links:" as an operator in Google search, but for each of my sites this returns either zero or very few results, even in cases where I know I have many backlinks; most of the links simply aren't shown. My thinking is that Google maintains a graph of who links to my site, so I figured they might let me see it, but I can't figure out how. I've found this tool on a spammy website: http://www.backlinkwatch.com. It offers more data than Google on my backlinks, and more results in exchange for a paid subscription. The data it offers for free looks good, but the results are limited and the site has popups and obnoxious ads. So, in short: how do I get data on who links to me? Is there a free way?

    Read the article

  • Interface extension

    - by user877329
    Suppose that I have an input stream interface, which defines a method for reading data. I also have a seekable interface, which defines a method for seeking. A natural way of defining an input file is then to implement both input stream and seekable. I want to construct a data decoder on top of the input stream interface, so I can read data from a file or from another stream. The problem is that I also want the data decoder to offer seek functionality, since I want to be able to step through individual records, not raw bytes. This is not possible if I only hold an input stream, which does not have the bytewise seek method. Should I skip the seekable interface and add the seek method to input stream instead, forcing all streams to at least implement it as a no-op?
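
    One way to keep both interfaces, sketched below in Java (the names are mine and purely illustrative; fixed-size records are assumed for the arithmetic): the decoder accepts the plain input interface, and enables record-level seeking only when the underlying source also happens to be seekable.

    Code:
    interface ByteInput { int read(byte[] buf, int off, int len); }
    interface Seekable  { void seek(long bytePosition); }

    // An input file would implement both interfaces.
    interface SeekableByteInput extends ByteInput, Seekable { }

    class DataDecoder {
        private final ByteInput in;
        private final long recordSize;

        DataDecoder(ByteInput in, long recordSize) {
            this.in = in;
            this.recordSize = recordSize;
        }

        // Record-level seek works only when the source supports byte seeks.
        void seekToRecord(long recordIndex) {
            if (in instanceof Seekable) {
                ((Seekable) in).seek(recordIndex * recordSize);
            } else {
                throw new UnsupportedOperationException("source is not seekable");
            }
        }
    }

    The instanceof test is the usual trade-off here: it avoids polluting the stream interface with a no-op seek, at the cost of a runtime capability check.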

    Read the article

  • for an ajax heavy web application which would be better SOAP or REST?

    - by coder
    I'm building an ajax-heavy application (client-side strictly html/css/js) which will be getting all its data and using server business logic via webservices. I know REST seems to be the hot topic, but I can't find any good arguments. The main argument seems to be that it is "lightweight". My impression so far is that wsdl/soap-based services are more expressive and allow for a more complex transfer of data. It appears that SOAP would be more useful in the application I'm building, where the only code consuming the services will be the js downloaded in the client browser. REST, on the other hand, seems to have a smaller entry barrier and so can be more useful for services like Twitter, allowing other developers to consume those services easily. Also, REST seems to be better suited for simple data transfers. So, in summary: SOAP is useful for complex data transfer and REST is useful for simple data transfer. I'm currently under the impression that using SOAP would be best due to the complexity of the messages, but perhaps there are other factors. What are your thoughts on the pros/cons of SOAP/REST for a heavy ajax web app?

    Read the article

  • Optimizing perceived load time for social sharing widgets on a page?

    - by Lucka
    I have placed the Facebook "Like" button and links to some other social bookmarking websites on my blog, such as Google Buzz, Digg, Twitter, etc. I just noticed that it takes a while to load my blog page, as it needs to load the data from the social networking sites (such as the number of likes). How can I place the links efficiently so that my blog content loads first and the data from these websites loads meanwhile -- in other words, so that these sharing widgets do not hang my blog page while waiting for data from external sites?

    Read the article

  • How to backup MySQL (mysqldump) when Memcached installed?

    - by cewebugil
    The server OS is CentOS, with Memcached installed. Before Memcached was installed, I used mysqldump -u root -p --lock-tables --add-locks --disable-keys --skip-extended-insert --quick wcraze > /var/backup/backup.sql But now Memcached has been installed. According to Wikipedia: "When the table is full, subsequent inserts cause older data to be purged in least recently used (LRU) order." This means a new data entry is not directly saved in MySQL but in Memcached instead; only when limit_maxbytes is full is the least-accessed data saved to MySQL. In other words, some data is not in MySQL but in Memcached, so at backup time the newest entries are not in the backup data. What is the right way to back up?

    Read the article

  • proxy software that supports parallel transfer

    - by est
    Hi guys, I need to set up a really fast proxy on a remote server. Here's the scenario: The server prefetches 3KB of data, mostly HTTP resources. Instead of a traditional HTTP or SOCKS proxy, the server sends the client the 3KB over a multithreaded transfer with 3 connections, 1KB of data per connection. The client receives the 3 x 1KB chunks, recombines them into the original 3KB of data, and serves the result through a local HTTP proxy. The browser then displays the original data via that local HTTP proxy. Latency is not important as long as the transfer rate is good. Does any software like this exist? Ideally it would be open source or free.
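
    I don't know of an off-the-shelf tool, but the core mechanism is simple to sketch. Here is an illustrative Java fragment (my own, with the socket plumbing deliberately omitted) that splits a block into per-connection chunks and reassembles them in order:

    Code:
    import java.util.*;

    public class ChunkSplit {
        static List<byte[]> split(byte[] block, int parts) {
            List<byte[]> chunks = new ArrayList<>();
            int size = (block.length + parts - 1) / parts; // ceiling division
            for (int off = 0; off < block.length; off += size) {
                chunks.add(Arrays.copyOfRange(block, off, Math.min(off + size, block.length)));
            }
            return chunks;
        }

        static byte[] join(List<byte[]> chunks) {
            int total = chunks.stream().mapToInt(c -> c.length).sum();
            byte[] out = new byte[total];
            int off = 0;
            for (byte[] c : chunks) {
                System.arraycopy(c, 0, out, off, c.length);
                off += c.length;
            }
            return out;
        }

        public static void main(String[] args) {
            byte[] block = new byte[3072];          // the 3KB block
            List<byte[]> parts = split(block, 3);   // one 1KB chunk per connection
            byte[] rebuilt = join(parts);           // client-side reassembly
            System.out.println(rebuilt.length);     // 3072
        }
    }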

    Read the article

  • Process for migrating Dropbox to SpiderOak

    - by Marcel Janus
    I want to move my data from Dropbox to SpiderOak. I have 3 computers running Dropbox, but I have a poor WAN connection with very limited upload bandwidth. So I thought that, as a first step, I would install the Dropbox client on my Internet-facing server and download my data from Dropbox there. Then I would upload/back up my data from this server, over its broadband connection, to SpiderOak. After the backup is completed, I would set up the sync between my 3 computers so that they will not have to upload the data again. Will this process work, so that I don't have to upload my data again over my WAN connection at home?

    Read the article

  • How to display image in second layer in Cocos2d

    - by PeterK
    I am very new at Cocos2d and am testing displaying an image over the "Hello World" text on a second layer, and I need help to get it to work. I guess it is some basic stuff, and I appreciate any tips etc. I know that if I put the display code in myLayer1's "init" it works, or if I call [self goHere] from the "init" in myLayer1 it works, but I want to call "goHere" directly. I have the following code:

    HelloWorld.m:

    #import "HelloWorldLayer.h"
    #import "myLayer1.h"

    // HelloWorldLayer implementation
    @implementation HelloWorldLayer

    +(CCScene *) scene
    {
        // 'scene' is an autorelease object.
        CCScene *scene = [CCScene node];

        // 'layer' is an autorelease object.
        HelloWorldLayer *layer = [HelloWorldLayer node];
        myLayer1 *layer1 = [myLayer1 node];

        // add layer as a child to scene
        [scene addChild: layer];
        [scene addChild: layer1];

        // return the scene
        return scene;
    }

    // on "init" you need to initialize your instance
    -(id) init
    {
        // always call "super" init
        // Apple recommends to re-assign "self" with the "super" return value
        if( (self=[super init])) {

            // create and initialize a Label
            CCLabelTTF *label = [CCLabelTTF labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:64];

            // ask the director for the window size
            CGSize size = [[CCDirector sharedDirector] winSize];

            // position the label on the center of the screen
            label.position = ccp( size.width /2 , size.height/2 );

            // add the label as a child to this Layer
            [self addChild: label];

            myLayer1 *a1 = [myLayer1 new];
            [a1 goHere];
            [myLayer1 release];
        }
        return self;
    }

    myLayer1.m:

    #import "myLayer1.h"

    @implementation myLayer1

    -(void)goHere
    {
        NSLog(@">>>>goHere<<<<");
        CGSize size = [[CCDirector sharedDirector] winSize];
        CCSprite *vv = [CCSprite spriteWithFile:@"hand.png"];
        vv.position = ccp( size.width /2 , size.height/2 );
        [self addChild:vv z:3];
    }

    -(id) init
    {
        // always call "super" init
        // Apple recommends to re-assign "self" with the "super" return value
        if( (self=[super init])) {
        }
        return self;
    }

    @end

    Read the article

  • Using ASP.NET 3.5 ListView in a Web Application

    This tutorial shows an example of how to use the ListView web control, featuring data updating and validation before the data is inserted or updated in the MS SQL Server database. Examples of how to use ListView controls to retrieve information from the data source are featured in the first part of this tutorial, which appeared yesterday.

    Read the article

  • Introduction to WebCenter Personalization: &ldquo;The Conductor&rdquo;

    - by Steve Pepper
    There are some new faces in the town of WebCenter with the latest 11g PS3 release. A new component has introduced itself as "Oracle WebCenter Personalization", a.k.a. WCP, to simplify delivery of a personalized experience and content to end users. This posting reviews one of the primary components within WCP: "The Conductor".

    The Conductor: This ain't just an ordinary cloud...
    One of the founding principles behind WebCenter Personalization was to provide an open client-side API that remains independent of the technology invoking it, in addition to independence from the architecture running it. The Conductor delivers this, and much, much more. The Conductor is the engine behind WebCenter Personalization that allows flow-based documents, called "Scenarios", to be managed and executed on the server side through a well-published and RESTful API. The Conductor also supports an extensible model for custom provider integration that can be easily invoked within a Scenario to promote seamless integration with existing business assets.

    Introducing the Scenario Conductor
    Scenarios are declarative, offline-authored documents created with the custom Personalization JDeveloper bundle included with WebCenter. A Scenario contains one or more statements that can:
    - Create variables that are scoped to the current execution context
    - Iterate over collections, or loop until a specific condition is met
    - Execute one or more statements when a condition is met
    - Invoke other scenarios that exist within the same namespace
    - Invoke a data provider that integrates with custom applications
    Once a variable is assigned within the Scenario's execution context, it can be referenced anywhere within the same Scenario using the common Expression Language syntax used in J2EE web containers. Scenarios are then published and tested on the Integrated WebLogic Server domain, or published remotely to other domains running WebCenter Personalization.

    Various client-side models
    The Conductor server API is built upon RESTful services that support a wide variety of clients able to communicate over HTTP. The Conductor supports the following client-side models:
    - REST: Popular browser-based languages can be used to manage and execute Conductor Scenarios. There are other public methods to retrieve configured provider metadata that can be used by custom applications. The Conductor currently supports XML and JSON for its API syntax.
    - Java: WebCenter Personalization delivers a robust and lightweight Java client with the popular Jersey framework as its foundation. It has never been easier to write a remote Java client to manage remote RESTful services.
    - Expression Language (EL): Allows the results of Scenario execution to control your user interface, or embeds personalized content, using the session-scoped managed bean. The EL client can also be used in straight JSP pages with minimal configuration.

    Extensible provider framework
    The Conductor supports a pluggable provider framework for integrating custom code with Scenario execution. There are two types of providers supported by the Conductor:
    - Function Providers: simple annotated Java classes with static methods that are meant to serve as utilities. Common uses include object creation or instantiation, data transformation, and the like. Function Providers can be invoked using the common EL syntax from variable assignments, conditions, and loops. For example: ${myUtilityClass:doStuff(arg1, arg2)}. If you are familiar with EL functions, Function Providers are based on the same concept.
    - Data Providers: like Function Providers, Data Providers are annotated Java classes, but they must adhere to a much stricter object model. Data Providers have access to a wealth of Conductor services, such as a namespace-scoped configuration API that can be managed by Oracle Enterprise Manager, the Scenario execution context for expression resolution, and more. Oracle ships with three out-of-the-box data providers that support integration with standardized content servers (CMIS), federated profile properties through the Properties Service, and WebCenter Activity Graph.

    Useful references
    If you are looking to immediately get started writing your own application using WebCenter Personalization Services, you will find the following references helpful in getting you on your way:
    - Personalizing WebCenter Applications
    - Authoring Personalized Scenarios in JDeveloper
    - Using Personalization APIs Externally
    - Implementing and Calling Function Providers
    - Implementing and Calling Data Providers

    Read the article

  • How does datomic handle "corrections"?

    - by blueberryfields
    tl;dr: Rich Hickey describes Datomic as a system which implicitly deals with timestamps associated with data storage. In my experience, data is often imperfectly stored in systems, and on many occasions needs to be retroactively corrected (i.e., often the question "was A true on Tuesday at 12:00pm?" will have an incorrect answer stored in the database). This seems like a spot where the abstractions behind Datomic might break - do they? If they don't, how does the system handle such corrections?

    Rich Hickey, in several of his talks, justifies the creation of Datomic and explains its benefits. His work, if I understand correctly, is motivated by the core insight that humans, when speaking about data and facts, implicitly associate some related context into their statements (a date-time). By pushing the work required to manage the implicit date-time component of context into the database, he's created a system which is both much easier to understand and much easier to program. This turns out to be relevant to most database programmers in practice - his work saves everyone a lot of time managing complex, hard-to-produce/debug/fix time queries. However, especially in large databases, data is often damaged or incorrect (maybe it was not input correctly, maybe it eroded over time, etc.). While most database updates are insertions of new facts, and should indeed be treated that way, a non-trivial subset of the work required to manage time queries has to do with retroactive updates. I have yet to see any documentation which explains how such corrections, or retroactive updates, are handled by Datomic; in my experience, they are a non-trivial (and incredibly difficult to deal with) subset of time-related data manipulation that database programmers are faced with. Does Datomic gracefully handle such updates? If so, how?

    Read the article

  • Excel chart won't update, based on calculated cells

    - by samJL
    I have an Excel document (2007) with a chart (Clustered Column) that gets its data series from cells containing calculated values. The calculated values never change directly, but only as a result of other cells in the sheet changing. When I change other cells in the sheet, the data series cells are recalculated and show new values - but the chart based on this data series refuses to update automatically. I can get the chart to update by saving/closing, by toggling one of the settings (such as reversing the x/y axis and then putting it back), or by re-selecting the data series. Every solution I have found online doesn't work:
    - I have Calculation set to Automatic
    - Ctrl+Alt+F9 updates everything fine, EXCEPT the chart
    - I have recreated the chart several times, and on different computers
    - I have tried VBA scripts like:
      Application.Calculate
      Application.CalculateFull
      Application.CalculateFullRebuild
      ActiveWorkbook.RefreshAll
      DoEvents
    None of these update or refresh the chart. I do notice that if I type over my data series with actual numbers instead of calculations, it will update the chart - it's as if Excel doesn't want to recognize changes in the calculations. Has anyone experienced this before, or do you know what I might do to fix the problem? Thank you

    Read the article

  • How to detect a touch on transparent area of an image in a (libgdx) stage?

    - by Usman
    Can someone please help me detect a touch on an image which I am using as an actor in a stage? The image is actually a long diagonal brush which has plenty of transparent area. The problem is that when I touch the transparent area of the brush image, it also triggers the click listener of the image. The click listener should only be called when the finger actually touches the visible image, not the area which is empty. I am using the libgdx-0.9.4 libraries. Here is my simple piece of code:

    import com.badlogic.gdx.scenes.scene2d.Actor;
    import com.badlogic.gdx.scenes.scene2d.ui.Image;
    import com.badlogic.gdx.scenes.scene2d.ui.ClickListener;

    Image brushImg = new Image(ImageCache.getTexture("brush"));
    brushImg.width = mStage.width() * 0.75f;
    brushImg.height = mStage.height() * 0.75f;
    brushImg.setClickListener(new ClickListener() {
        @Override
        public void click(Actor actor, float x, float y) {
            SoundFactory.play("brush");
        }
    });
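
    One common remedy (sketched below; my own illustration, not verified against libgdx 0.9.4) is to read the brush texture back into a Pixmap once and ignore clicks whose pixel has zero alpha. The x/y passed to click() would still need scaling from actor space to image pixel space.

    Code:
    import com.badlogic.gdx.graphics.Pixmap;
    import com.badlogic.gdx.graphics.Texture;
    import com.badlogic.gdx.graphics.TextureData;

    public class AlphaHitTest {
        private final Pixmap pixmap;

        public AlphaHitTest(Texture texture) {
            // Pull the texture's pixels back into CPU memory once.
            TextureData data = texture.getTextureData();
            if (!data.isPrepared()) data.prepare();
            pixmap = data.consumePixmap();
        }

        /** x, y in image pixel coordinates; true if the touched pixel is visible. */
        public boolean isOpaque(int x, int y) {
            int rgba = pixmap.getPixel(x, y); // RGBA8888: alpha is the low byte
            return (rgba & 0xff) != 0;
        }
    }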

    Read the article

  • BI&EPM in Focus November 2012

    - by Mike.Hallett(at)Oracle-BI&EPM
    Customers
    · San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings
    · NilsonGroup chooses Oracle Exalytics In-Memory Machine as their solution to access critical data to keep its stores competitive with real-time Mobile BI: Video
    · Nykredit, in the Danish Financial Sector, describes their experiences from testing the Exalytics Business Intelligence Machine: Video
    · Sodexo chose Oracle Exalytics as their business analytics platform: Video
    · AstraZeneca (US, Canada, MedImmune) Improves Insight, Analytics, and Reporting, Enterprisewide with Unified Planning on a Single Platform
    · Experian Consolidates Reporting Systems for One, Global View of Financial Data and Improves Planning for Continued Growth
    · Munchkin Gives its Line of Children’s Products Plenty of Room to Grow in an Upgraded Enterprise Application Environment
    · Top 20 EPM Customer Snapshots, in One Handy Booklet (link)
    · Customer and Partner Successes: Link to Complete Archive

    Enterprise Performance Management
    · Nov 15: Is Hope and Email the Core of your Reconciliation Process? (link)
    · Replay: Integrated Business Planning, Featuring Leggett & Platt (link)
    · Whitepaper: The New Competitive Advantage - Strategic CIO's Embrace the Cloud (link)
    · Press: Oracle Enterprise Performance Management Driving Significant Improvements in Budget Management and Reporting for Public Sector Organizations (link)
    · Enterprise Performance Management Video Feature Overviews, Now Available on YouTube (link)
    · NEW Solution Brief - Oracle Hyperion Planning on the Oracle Exalytics In-Memory Machine (link)
    · For Insurance sector: Datasheet for new release V2.0 - Oracle Quantitative Management and Reporting for Solvency II (link)
    · Whitepaper FSN 2012: Managing Risk and Uncertainty, an Executive's Guide to Integrated Business Planning (link)
    · NEW Datasheet for Oracle Planning and Budgeting Cloud Service (link)
    · Blog: Planning in the Cloud - For Real Business Intelligence
    · Press: Latest Release of Oracle Exalytics In-Memory Machine Software Enables Customers to View and Analyze Data at the Speed of Business (link)
    · Press: New Release (11.1.1.6.2BP1) of Oracle Business Intelligence Enables Users to Quickly Access and Analyze Key Business Information, Anytime, Anywhere (link)
    · Mark Hurd Interviewed on USA Today about Big Data & Analytics (link)
    · Whitepaper: Mastering Big Data - CFO Strategies to Transform Insight into Opportunity (link)
    · Nov 15: Improve Asset Utilization. Achieve Greater Profitability: Oracle Enterprise Asset Management Analytics (link)
    · Replay: Oracle Enterprise Asset Mgmt Analytics and Oracle Manufacturing Analytics (link)
    · Overload to Impact: An Industry Scorecard on Big Data Business Challenges (link)
    · Webcast Replay: Overview of Oracle Endeca Informational Discovery (link)
    · OBIEE 11g: Required and Recommended Patches and Patch Sets (link)

    Read the article

  • MarteEngine Tile Collision

    - by opiop65
    I need to add collision to my tile map using MarteEngine. MarteEngine is built on top of Slick2D. Here's my tile generation code:

    Code:
    public void render(GameContainer gc, StateBasedGame game, Graphics g)
            throws SlickException {
        for (int x = 0; x < 16; x++) {
            for (int y = 0; y < 16; y++) {
                map[x][y] = AIR;
                air.draw(x * GameWorld.tilesize, y * GameWorld.tilesize);
            }
        }
        for (int x = 0; x < 16; x++) {
            for (int y = 7; y < 8; y++) {
                map[x][y] = GRASS;
                grass.draw(x * tilesize, y * tilesize);
            }
        }
        for (int x = 0; x < 16; x++) {
            for (int y = 8; y < 10; y++) {
                map[x][y] = DIRT;
                dirt.draw(x * tilesize, y * tilesize);
            }
        }
        for (int x = 0; x < 16; x++) {
            for (int y = 10; y < 16; y++) {
                map[x][y] = STONE;
                stone.draw(x * tilesize, y * tilesize);
            }
        }
        super.render(gc, game, g);
    }

    And one of my tile classes (they're all the same; only the image names differ):

    Code:
    package MarteEngine;

    import org.newdawn.slick.Image;
    import org.newdawn.slick.SlickException;

    import it.randomtower.engine.entity.Entity;

    public class Grass extends Entity {

        public static Image grass = null;

        public Grass(float x, float y) throws SlickException {
            super(x, y);
            grass = new Image("res/grass.png");
            setHitBox(0, 0, 50, 50);
            addType(SOLID);
        }
    }

    I tried to do it like this:

    Code:
    for (int x = 0; x < 16; x++) {
        for (int y = 7; y < 8; y++) {
            map[x][y] = GRASS;
            Grass.grass.draw(x * tilesize, y * tilesize);
        }
    }

    But it gave me a NullPointerException. No idea why, everything looks initialized right? I would be very grateful for some help!
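
    For what it's worth, the NullPointerException is consistent with Grass.grass only being assigned inside the constructor: drawing via Grass.grass before any Grass instance exists dereferences null. One hedged sketch of a workaround (assuming the class is first touched after the OpenGL context exists, since Slick2D images need one):

    Code:
    package MarteEngine;

    import org.newdawn.slick.Image;
    import org.newdawn.slick.SlickException;

    import it.randomtower.engine.entity.Entity;

    public class Grass extends Entity {

        // Load the shared image once, independent of any instance,
        // so Grass.grass is usable before a Grass is ever constructed.
        public static Image grass;

        static {
            try {
                grass = new Image("res/grass.png");
            } catch (SlickException e) {
                throw new ExceptionInInitializerError(e);
            }
        }

        public Grass(float x, float y) throws SlickException {
            super(x, y);
            setHitBox(0, 0, 50, 50);
            addType(SOLID);
        }
    }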

    Read the article

  • How to dynamically generate PDF documents

    - by Thomas
    I want to build a web application for generating stylish PDF documents. The layout should be based on design templates, and the data should come dynamically from the database. Ideally I want to design the template in a "publishing-like" tool with placeholders, and have the web application replace these placeholders with the data from the database. Think of something like an invoice generator, where a customer could choose from different invoice templates, with the invoice data itself coming from the DB. Thanks for your ideas!
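
    As one possible direction (a sketch only, using Apache PDFBox 2.x; template handling is reduced to naive string placeholders, and all names and values are mine), a Java service could render such a document like this:

    Code:
    import java.io.IOException;
    import org.apache.pdfbox.pdmodel.PDDocument;
    import org.apache.pdfbox.pdmodel.PDPage;
    import org.apache.pdfbox.pdmodel.PDPageContentStream;
    import org.apache.pdfbox.pdmodel.font.PDType1Font;

    public class InvoicePdf {
        public static void main(String[] args) throws IOException {
            // Placeholder text as it might come from a stored template;
            // the values would come from the database in a real application.
            String template = "Invoice for {customer}, total {total}";
            String line = template.replace("{customer}", "Acme Corp")
                                  .replace("{total}", "EUR 1,250.00");

            try (PDDocument doc = new PDDocument()) {
                PDPage page = new PDPage();
                doc.addPage(page);
                try (PDPageContentStream cs = new PDPageContentStream(doc, page)) {
                    cs.beginText();
                    cs.setFont(PDType1Font.HELVETICA, 12);
                    cs.newLineAtOffset(72, 720); // roughly 1 inch from the top-left
                    cs.showText(line);
                    cs.endText();
                }
                doc.save("invoice.pdf");
            }
        }
    }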

    Read the article

  • Inside Red Gate - Exercising Externally

    - by simonc
    Over the next few weeks, we'll be performing experiments on SmartAssembly to confirm or refute various hypotheses we have about how people use the product, what is stopping them from using it to its full extent, and what we can change to make it more useful and easier to use. Some of these experiments can be done within the team, some within Red Gate, and some need to be done on external users.

    External testing
    Some external testing can be done by standard usability tests and surveys; however, there are some hypotheses that can only be tested by building a version of SmartAssembly with some things in the UI or implementation changed. We'll then be able to look at how the experimental build is used compared to the 'mainline' build, which forms our baseline or control group, and use this data to confirm or refute the relevant hypotheses. However, there are several issues we need to consider before running experiments using separate builds:

    Ideally, the user wouldn't know they're running an experimental SmartAssembly. We don't want users to use the experimental build like it's an experimental build, we want them to use it like it's the real mainline build. Only then will we get valid, useful, and informative data concerning our hypotheses.

    There's no point running the experiments if we can't find out what happens after the download. To confirm or refute some of our hypotheses, we need to find out how the tool is used once it is installed. Fortunately, we've applied feature usage reporting to the SmartAssembly codebase itself to provide us with that information. Of course, this then makes the experimental data conditional on the user agreeing to send that data back to us in the first place. Unfortunately, even though this does limit the amount of useful data we'll be getting back, and possibly skew the data, there's not much we can do about this; we don't collect feature usage data without the user's consent. Looks like we'll simply have to live with this.

    What if the user tries to buy the experiment? This is something that isn't really covered by the Lean Startup book; how do you support users who give you money for an experiment? If the experiment is a new feature, and the user buys a license for SmartAssembly based on that feature, then what do we do if we later decide to pivot and scrap that feature? We've either got to spend time and money bringing that feature up to production quality and into the mainline anyway, or we've got disgruntled customers. Either way is bad.

    Similarly, what if we've removed some features for an experiment and a potential new user downloads the experimental build? (As I said above, there's no indication the build is an experimental build, as we want to see what users really do with it.) The crucial feature they need is missing, causing a bad trial experience, a lost potential customer, and a lost chance to help the customer with their problem. Again, this is something not really covered by the Lean Startup book, and something that doesn't have a good solution.

    So, some tricky issues there, not all of them with nice easy answers. Turns out the practicalities of running Lean Startup experiments are more complicated than they first seem! Cross posted from Simple Talk.

    Read the article

  • Which 3D file formats support multiple animations? [on hold]

    - by Justin
    I'm working on a 3D application that uses Assimp to import 3D models with animations. Personally, I use Blender to create the models and animations. I'm having trouble exporting multiple animations, however. For example, I'd like to have an idle animation, a walk animation, a run animation, etc. So far I've tried COLLADA and DirectX without much success. The COLLADA export will include the first animation, but not any of the others. The DirectX export doesn't include any animation. Which 3D file formats support multiple animations? (Preferably one that Assimp can import. Also, the Assimp website says that it doesn't support .blend files with animation, otherwise I'd just do that.)

    Read the article
