Search Results

Search found 65101 results on 2605 pages for 'big data'.


  • Designing a large database with multiple sources

    - by CatchingMonkey
    I have been tasked with redesigning, or at worst optimising, the structure of a database for a data warehouse. Currently, the database has 4 other source databases (which is due to expand to X number of others), all of which have their own data structures, naming conventions etc. At the moment an overnight SSIS package pulls the data from the various sources and then, for each source, converts the data into a standardised, usable format. These tables are then appended to each other, creating a 60m row, 40 column beast! This table is then used in a variety of ways, from an OLAP cube to a web front end. The structure has been in place for a very long time, and through my work I have been able to prove the advantages of normalisation, which is the way I would like to go. The problem for me is that the overnight process takes so long that I don't then want to spend additional time normalising the last table into something usable. Can anyone offer any insight or ideas into the best way to restructure or optimise the database efficiently? Edit: All the databases are MS SQL Server 2008 R2. Thanks in advance, CM
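
    As a hedged illustration of the normalisation step (all object names here are invented): repeating attributes can be carved out of the flat table into a dimension once per load, with narrow fact rows stored against its key, so the wide table no longer has to be rebuilt wholesale:

        -- Add any customers this load hasn't seen before
        INSERT INTO dbo.DimCustomer (SourceSystem, CustomerCode, CustomerName)
        SELECT DISTINCT s.SourceSystem, s.CustomerCode, s.CustomerName
        FROM dbo.Staging_All AS s
        WHERE NOT EXISTS (SELECT 1 FROM dbo.DimCustomer AS d
                          WHERE d.SourceSystem = s.SourceSystem
                            AND d.CustomerCode = s.CustomerCode);

        -- Load narrow fact rows against the surrogate key
        INSERT INTO dbo.FactTransaction (CustomerKey, TransactionDate, Amount)
        SELECT d.CustomerKey, s.TransactionDate, s.Amount
        FROM dbo.Staging_All AS s
        JOIN dbo.DimCustomer AS d
          ON d.SourceSystem = s.SourceSystem
         AND d.CustomerCode = s.CustomerCode;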

    Read the article

  • Is my current employer expecting too much?

    - by priyank patel
    This is my first job as a programmer. I am working with ASP.NET/C#, HTML, CSS, and JavaScript/jQuery. I work for a firm which develops software for small banking firms; currently they have their software running in 100 firms. Their software is developed in Visual FoxPro, and I was hired to develop an online version of it. I am the solo developer. My boss is another developer, so the company has two developers, but my boss does not have any idea about .NET development. I have been working on their project for 8 months. The progress is surely there, but not very big. I try my best to do what my boss asks, but the project just seems too ambitious for me. The company does not have any planning for the project; they just ask me to develop what their older software provides. So I have to deal with the front end, back end, code reviews, architecture design, and so on. I have decided to give my best. I try a lot, but the project sometimes just seems overwhelming. So my question is: is it normal for a programmer to be in this place? I always feel the need to work in at least a small team, if not a big one. Are my employers just expecting too much of a fresher? Or is it that I, being a programmer, am lacking the skills to deal with this? I am just not able to judge my situation. Also, I am paid a very low salary, and I work on Saturdays as well. Can anyone help me judge this scenario? Any suggestions are welcome.

    Read the article

  • Oracle's SPARC T4, 007 Style

    - by Kristin Rose
    The name's 4, T4, and this powerhouse travels hand in hand with its good friend SPARC. About 6 years ago, on-chip encryption acceleration first shipped in a commercial system, the SPARC T1. Today, thanks to Oracle's innovative leadership in SPARC on-chip encryption acceleration, support for complex cryptographic computations has rapidly evolved. Customers can now have security with performance, because we, my friend, are in the Age of Big Data. If you need some high speed action in your life, listen here. The SPARC T4 systems offer customers much more value for applications than just increased performance. Partners can integrate their own applications with Oracle's SPARC T4 servers for cloud deployments, providing direct business benefits (security, performance, and optimization) that supersede the commodity approach to data center computing. As companies continue down the complex path of big data, eCommerce, and mobility, the need to provide better and more in-depth security is more prominent than ever. Oracle's SPARC T4 processor allows customers to deliver the highest levels of application security, as well as the necessary level of performance, without added cost and complexity. To learn more about the value of SPARC T4, check out a more in-depth blog here. For more on the SPARC T4 family of products, click here.
    Encryption Lives Another Day,
    The OPN Communications Team

    Read the article

  • How to make a legacy system time-zone sensitive?

    - by Jerry Dodge
    I need to implement time zones in a very large and old Delphi system, where there's a central SQL Server database and possibly hundreds of client installations around the world in different time zones. The application already interacts with the database by only using the date/time of the database server. So, all the time stamps saved in both the database and on the client machines are the date/time of the database server when it happened, never the time of the client machine. So, when a client is about to display the date/time of something (such as a transaction) which is coming from this database, it needs to show the date/time converted to the local time zone. This is where I get lost. I would naturally assume there should be something in SQL to recognize the time zone and convert a DateTime field dynamically. I'm not sure if such a thing exists though. If so, that would be perfect, but if not, I need to figure out another way. This Delphi system (multiple projects) utilizes the SQL Server database using ADO components, VCL data-aware controls, and QuickReports (using data sources). So, there's many places where the data goes directly from the database query to rendering on the screen, without any code to actually put this data on the screen. In the end, I need to know when and how should I get the properly converted time? What is the proper way to ensure that I handle Dates and Times correctly in a legacy application?
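
    One common approach, sketched in T-SQL with invented names: store and compare timestamps as UTC on the server (GETUTCDATE() rather than GETDATE()) and convert only at the edge. Note that SQL Server 2008 R2 has no built-in time zone conversion (AT TIME ZONE only arrived in SQL Server 2016), so the client's offset has to be supplied by the client or by a lookup table, and daylight saving rules must be handled there as well:

        CREATE FUNCTION dbo.ToClientTime (@Utc DATETIME, @OffsetMinutes INT)
        RETURNS DATETIME
        AS
        BEGIN
            -- Shift a UTC timestamp by the caller-supplied offset
            RETURN DATEADD(MINUTE, @OffsetMinutes, @Utc);
        END;
        GO

        -- e.g. a client at UTC-05:00 reading a hypothetical transactions table
        SELECT dbo.ToClientTime(t.CreatedAtUtc, -300) AS CreatedLocal
        FROM dbo.Transactions AS t;

    Because the existing system stamps rows with the database server's local time rather than UTC, a one-off conversion of historical data (or a known server-offset column) would be needed before queries like this are trustworthy.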

    Read the article

  • Real Excel Templates 1.5

    - by Tim Dexter
    Not the next installment quite yet, just an update on what I knew yesterday. Right after I posted Real Excel Templates I, Mike from the PM team got in touch to say he and Shirley had just had a meeting with a customer about the Excel templates and all the fab features. He included BIP's extended functions, data pre-processing, sub templates and other functionality, which was great news. One caveat: much of the really new stuff is not quite out in the wild yet. I will let you know as soon as I know more. Shirley and I shared a conversation around being able to re-group data in the templates. It's one of the most powerful features of the RTF template, providing the ultimate flexibility in layouts. As I wrote yesterday, you need hierarchical data for Excel templates. I stand corrected. 'Of course you can do that in Excel, here's an example' said Shirley. 'Very cunning Shirley, very cunning' says I. You can basically use the hidden sheet to re-group the data using native XSL. I'll cover the 'how' later. As you can see, Excel templates are the new 'black', with lots of attention and, more importantly, development cycles to take them forward. Looks like we are going to have a great weekend weather-wise here in Colorado. The yard work and pond are beckoning. Maybe the trout will be rising and I can give my rusty fly casting skills a run for their money. I need some stupid fish though :0) See ya'll next week!

    Read the article

  • Is there any reason not to go directly from client-side Javascript to a database?

    - by Chris Smith
    So, let's say I'm going to build a Stack Exchange clone and I decide to use something like CouchDB as my backend store. If I use their built-in authentication and database-level authorization, is there any reason not to allow the client-side Javascript to write directly to the publicly available CouchDB server? Since this is basically a CRUD application and the business logic consists of "Only the author can edit their post", I don't see much of a need to have a layer between the client-side stuff and the database. I would simply use validation on the CouchDB side to make sure someone isn't putting in garbage data, and make sure that permissions are set properly so that users can only read their own _user data. The rendering would be done client-side by something like AngularJS. In essence you could just have a CouchDB server and a bunch of "static" pages and you're good to go. You wouldn't need any kind of server-side processing, just something that could serve up the HTML pages. Opening my database up to the world seems wrong, but in this scenario I can't think of why, as long as permissions are set properly. It goes against my instinct as a web developer, but I can't think of a good reason. So, why is this a bad idea?
    EDIT: Looks like there is a similar discussion here: Writing Web "server less" applications
    EDIT: Awesome discussion so far, and I appreciate everyone's feedback! I feel like I should add a few generic assumptions instead of calling out CouchDB and AngularJS specifically. So let's assume that:
    - The database can authenticate users directly from its hidden store
    - All database communication would happen over SSL
    - Data validation can (but maybe shouldn't?) be handled by the database
    - The only authorization we care about, other than admin functions, is someone only being allowed to edit their own post
    - We're perfectly fine with everyone being able to read all data (EXCEPT user records, which may contain password hashes)
    - Administrative functions would be restricted by database authorization
    - No one can add themselves to an administrator role
    - The database is relatively easy to scale
    - There is little to no true business logic; this is a basic CRUD app
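
    In CouchDB itself the author-only rule would live in a JavaScript validate_doc_update function in a design document; purely to illustrate pushing that rule into the database layer, in the one language used for examples on this page, here is a rough relational analogue in T-SQL with a hypothetical dbo.Posts table:

        -- Reject any edit where the session login is not the stored author
        CREATE TRIGGER trg_Posts_AuthorOnly ON dbo.Posts
        INSTEAD OF UPDATE
        AS
        BEGIN
            IF EXISTS (SELECT 1 FROM deleted WHERE Author <> SUSER_SNAME())
            BEGIN
                RAISERROR(N'Only the author can edit their post.', 16, 1);
                RETURN;
            END;
            UPDATE p SET Title = i.Title, Body = i.Body
            FROM dbo.Posts AS p
            JOIN inserted AS i ON i.PostID = p.PostID;
        END;

    Either way, the point stands: the rule is enforced where the data lives, not in a middle tier.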

    Read the article

  • Introducing the Industry's First Analytics Machine, Oracle Exalytics

    - by Manan Goel
    Analytics is all about gaining insights from the data for better decision making. The business press is abuzz with examples of leading organizations across the world using data-driven insights for strategic, financial and operational excellence. A recent study on "data-driven decision making" conducted by researchers at MIT and Wharton provides empirical evidence that "firms that adopt data-driven decision making have output and productivity that is 5-6% higher than the competition". The potential payoff for firms can range from higher shareholder value to a market leadership position. However, the vision of delivering fast, interactive, insightful analytics has remained elusive for most organizations. Most enterprise IT organizations continue to struggle to deliver actionable analytics due to time-sensitive, sprawling requirements and ever-tightening budgets. The issue is further exacerbated by the fact that most enterprise analytics solutions require dealing with a number of hardware, software, storage and networking vendors, and precious resources are wasted integrating the hardware and software components to deliver a complete analytical solution. Oracle Exalytics In-Memory Machine is the world's first engineered system specifically designed to deliver high performance analysis, modeling and planning. Built using industry-standard hardware, market-leading business intelligence software and in-memory database technology, Oracle Exalytics is an optimized system that delivers answers to all your business questions with unmatched speed, intelligence, simplicity and manageability. Oracle Exalytics' unmatched speed, visualizations and scalability deliver extreme performance for existing analytical and enterprise performance management applications, and enable a new class of intelligent applications like Yield Management, Revenue Management, Demand Forecasting, Inventory Management, Pricing Optimization, Profitability Management, Rolling Forecast and Virtual Close. Requiring no application redesign, Oracle Exalytics can be deployed in existing IT environments by itself or in conjunction with Oracle Exadata and/or Oracle Exalogic to enable extreme performance and a best-in-class user experience. Based on proven hardware, software and in-memory technology, Oracle Exalytics lowers the total cost of ownership, reduces operational risk and provides unprecedented analytical capability for workgroup, departmental and enterprise-wide deployments. Click here to learn more about Oracle Exalytics.

    Read the article

  • How should I take being told that I was wrong?

    - by Chris
    On a fairly important project with short timelines, I decided to use SubSonic for straightforward data access. I wired up a handful of forms, created matching database tables and POCOs for each, and used SubSonic's simple repository mode for the data access. Everything worked well; I was able to bang these forms out pretty quickly and I moved on to other things. Since that time I have heard that using SubSonic was a 'cowboy move', that it was implemented 'incorrectly', and that 'the person who used it didn't even know how to use SubSonic'. What I would like to know is, how should I take this? There were and still are no standards for data access at this company, so there is no violation of a standard. The forms worked exactly as requested and saved the data to the database correctly. And spending only a few days on the forms instead of weeks saved a lot of time, which was used for other functionality in the project. So in light of all of this, I am confused as to what was 'incorrect'. Am I missing something here? Thanks for your answers.

    Read the article

  • What's New in Business Analytics at Oracle?

    - by jmorourke
    Business Analytics, which includes Business Intelligence and Enterprise Performance Management, is a top priority for IT and Finance executives in 2012. Some of the hot market trends and topics include managing big data, mobile information access, in-memory computing, advanced analytics, predictive modeling, and leveraging unstructured data, as well as risk and performance management. Find out what Oracle is doing about all of this, and what's new from the market leader in Business Analytics, by attending our live webcast event on April 4th titled "Introducing Oracle's Business Analytics Strategy". At this event, you'll hear about Oracle's strategy for Business Analytics from Mark Hurd, Oracle President, and you can learn about the latest advancements in Oracle's Business Analytics solutions from Balaji Yelamanchili, SVP of Analytics and Performance Management. The keynote session from Mark and Balaji will be followed by breakout sessions that provide a more in-depth look at what's new in specific product areas, including the latest release of Oracle's Hyperion Enterprise Performance Management suite, Oracle Business Intelligence Applications and the Exalytics In-Memory Machine, Oracle Endeca Information Discovery, and Big Data and Advanced Analytics solutions. This event will provide a great opportunity to hear about what's new in Business Analytics at Oracle, and for attendees to pose questions to Oracle experts during live chat sessions. Here's a link to the registration page and more details about the April 4th event. We hope to see you (virtually) there! http://www.oracle.com/us/corporate/events/business-analytics/index.html Also, use the following hashtag to follow along on Twitter and share comments during the webcast and Q&A sessions: #oracleanalytics

    Read the article

  • How should I manage persistent score in Game Center leaderboards?

    - by Omega
    Let's say that I'm developing an iOS RPG where the player gains 1 point per monster kill. The amount of monsters killed is persistent data: it is an endless adventure, and the score keeps on growing. It isn't a "session score" like Fruit Ninja, but rather a "reputation score". There are Game Center leaderboards for that score. Keep killing monsters, your score goes up, and the leaderboards are updated. My problem is that, technically, you can log out and log in using a different Game Center account, kill one monster, and the leaderboards will be updated for the new GC account. Supposing that this score is a big deal, this could be considered cheating, because if you have a score of 2000, any of your friends who have never played the game can simply log into your iPhone, play the game, and the system will update the score for their accounts, essentially giving them 2000 points in the leaderboards for doing nothing. I have considered linking one GC account to a specific save game. It won't update your score unless you're using the linked GC account. But what if the player actually needs to change their GC account? Technically they would be forced to start a new game and link their account to that profile. How should I prevent this kind of cheat? Essentially, I don't want someone to distribute a high score to multiple GC accounts, given the fact that the game updates the score constantly since it isn't a "session score". I do realize that it isn't quite a big deal. But I'm curious about how to avoid this.

    Read the article

  • C IDE for Mac needed

    - by StasM
    I'm looking for a heavy-duty C/C++ IDE for Mac that would satisfy the following criteria:
    - Work with big projects (~5000 files, some of them 100K big) efficiently.
    - Have good navigation, both file-based and symbol-based - i.e. "go to file", "go to function" etc., with autocompletion support.
    - Support for "go to declaration/definition" for symbols - functions, structures, etc.
    - Auto-adding new files in folders already in the project.
    - Support for code completion for values, function names, etc.
    - At least rudimentary CPP macro understanding - i.e. given #define foo bar, foo() should take me either to the #define or to the actual bar. I understand full CPP parsing may be hard, but I hope for at least the obvious cases.
    - Support for displaying parameter names/types by function name, preferably integrated with the previous item, for functions defined in the project. Support for libc would be nice too :)
    - (optional) Cross-project search support (I can manage with grep -r if everything else works)
    - (optional) SVN support, at least to some extent (update, commit, mark updated)
    Is there such an editor around? Free would be nice, but I'm ready to part with some money if it's good enough. I'm using TextMate now but I'm not satisfied with it. I tried Xcode, but it seems unable to handle a large project - it just crashed...

    Read the article

  • SQL SERVER – Copy Database – SQL in Sixty Seconds #067

    - by Pinal Dave
    There are multiple reasons why a user may want to make a copy of a database. Sometimes a user wants to copy the database to the same server, and sometimes to a different server. The important point is that DBAs and developers may want copies of their databases for various purposes. I copy my database for backup purposes. However, when we hear "copying a database", the very first thought which comes to mind is Backup and Restore or Attach and Detach. Both of these processes have their own advantages and disadvantages. As a matter of fact, those methods are more efficient and are the recommended ones. However, if you just want to copy your database as it is and do not want to go for the advanced features, you can simply use the copy feature of SQL Server. Here are the settings which you can use to copy the database.
    SQL in Sixty Seconds Video: I have attempted to explain the same subject in simple words in the following video.
    Action Item: Here are the blog posts I have previously written on this subject. You can read them over here:
    - Copy Database from Instance to Another Instance – Copy Paste in SQL Server
    - Copy Database With Data – Generate T-SQL For Inserting Data From One Table to Another Table
    - Copy Data from One Table to Another Table – SQL in Sixty Seconds #031 – Video
    - Generate Script for Schema and Data – SQL in Sixty Seconds #021 – Video
    You can subscribe to my YouTube Channel for frequent updates. Reference: Pinal Dave (http://blog.sqlauthority.com)
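
    For reference, the Backup and Restore route mentioned above looks roughly like this in T-SQL (database names and file paths here are hypothetical; RESTORE FILELISTONLY reports the logical file names to use with MOVE):

        BACKUP DATABASE [SalesDB]
        TO DISK = N'D:\Backup\SalesDB.bak' WITH COPY_ONLY, INIT;

        -- Inspect the logical file names inside the backup
        RESTORE FILELISTONLY FROM DISK = N'D:\Backup\SalesDB.bak';

        -- Restore under a new name, relocating the files
        RESTORE DATABASE [SalesDB_Copy]
        FROM DISK = N'D:\Backup\SalesDB.bak'
        WITH MOVE N'SalesDB' TO N'D:\Data\SalesDB_Copy.mdf',
             MOVE N'SalesDB_log' TO N'D:\Data\SalesDB_Copy_log.ldf';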

    Read the article

  • Is comparing an OO compiler to a SQL compiler/optimizer valid?

    - by Brad
    I'm now doing a lot of SQL development at my new job, whereas before I was doing object-oriented desktop app work. I keep running across very large scripts (thousands of lines) and wanting to refactor in some way. I am seeing that SQL is a different sort of beast, and it's probably fine to have these big scripts for the most part, but while explaining this to me, people are also insisting that the whole idea of refactoring is bad: that things like the .NET compiler are actually burdened by refactored code, and that a big wall of code is more efficient and better design than code designed for reuse, readability and scalability. The other argument is that OO compilers are almost dangerously inefficient, don't have efficient memory management, and run too many CPU instructions compared to older "simpler" compilers and compared to SQL. Are these valid complaints? Even if some compiler like a C compiler is modestly more "efficient" (whatever that means at this high a level without seeing code), would you want to write applications in C over C# or Java? Is comparing an OO compiler to a SQL compiler/optimizer even valid?

    Read the article

  • OWB/ODI Users: Last Chance to Submit and Vote On Sessions for OpenWorld 2010

    - by antonio romero
    Now is the last chance for OWB and ODI users to propose new ETL/DW/DI sessions for OpenWorld! Oracle OpenWorld 2010 "Suggest a Session" lets members of the Oracle Mix community submit and vote on papers/talks for OpenWorld. The most popular session proposals will be included in the conference program. One promising OWB-related topic has already been submitted: Case Study: Real-Time Data Warehousing and Fraud Detection with Oracle 11gR2. Dr. Holger Friedrich and consultants from sumIT AG in Switzerland built a real-time data warehouse and accompanying BI system for real-time online fraud detection with very limited resources and a short schedule. His presentation will cover:
    - How sumIT AG efficiently loads complex data feeds in real time in Oracle 11gR2 using, among others, Advanced Queues and XML DB
    - How they lowered costs and sped up development by leveraging the DB's development features, including Oracle Warehouse Builder
    - How they delivered a production-ready solution in a few short months using only three part-time developers
    Come vote for this proposal on Oracle Mix: https://mix.oracle.com/oow10/proposals/10566-case-study-real-time-data-warehousing-and-fraud-detection-with-oracle-11gr2 I have already invited members of the OWB/ODI LinkedIn group (with over 1400 members) to come vote on topics like this one and propose their own. If enough of us vote on a few topics, we are sure to get some on the agenda! And if you have your own topics, propose them using the Suggest-a-Session instructions here: http://wiki.oracle.com/page/Oracle+OpenWorld+2010+Suggest-a-Session If you propose a topic, don't forget to come to LinkedIn and promote it! I have already sent the members of the LinkedIn group an email announcement about this, and I will send another in a week, with links to all topics submitted. Thanks, all!

    Read the article

  • Oracle Exadata X3 announcement at Oracle Openworld

    - by Javier Puerta
    Oracle Announces Oracle Exadata X3 Database In-Memory Machine (Oracle Press Release)
    Fourth Generation Exadata X3 Systems are Ideal for High-End OLTP, Large Data Warehouses, and Database Clouds; Eighth-Rack Configuration Offers New Low-Cost Entry Point
    During his opening keynote address at Oracle OpenWorld, Oracle CEO Larry Ellison announced the Oracle Exadata X3 Database In-Memory Machine - the latest generation of its Oracle Exadata Database Machines. The Oracle Exadata X3 Database In-Memory Machine is a key component of the Oracle Cloud. Oracle Exadata X3-2 Database In-Memory Machine and Oracle Exadata X3-8 Database In-Memory Machine can store up to hundreds of terabytes of compressed user data in Flash and RAM memory, virtually eliminating the performance overhead of reads and writes to slow disk drives, making Exadata X3 systems the ideal database platforms for the varied and unpredictable workloads of cloud computing. In order to realize the highest performance at the lowest cost, the Oracle Exadata X3 Database In-Memory Machine implements a mass memory hierarchy that automatically moves all active data into Flash and RAM memory, while keeping less active data on low-cost disks. With a new Eighth-Rack configuration, the Oracle Exadata X3-2 Database In-Memory Machine delivers a cost-effective entry point for smaller workloads, testing, development and disaster recovery systems, and is a fully redundant system that can be used with mission critical applications. Detailed info at Oracle Exadata Database Machine.

    Read the article

  • Podcast Show Notes: Redefining Information Management Architecture

    - by Bob Rhubart-Oracle
    Nothing in IT stands still, and this is certainly true of business intelligence and information management. Big Data has certainly had an impact, as have Hadoop and other technologies. That evolution was the catalyst for the collaborative effort behind a new Information Management Reference Architecture. The latest OTN ArchBeat series features a conversation with Andrew Bond, Stewart Bryson, and Mark Rittman, key players in that collaboration. These three gentlemen know each other quite well, which comes across in a conversation that is as lively and entertaining as it is informative. But don't take my word for it. Listen for yourself!
    The Panelists (listed alphabetically):
    - Andrew Bond, head of Enterprise Architecture at Oracle
    - Oracle ACE Director Stewart Bryson, owner and co-founder of Red Pill Analytics
    - Oracle ACE Director Mark Rittman, CIO and co-founder of Rittman Mead
    The Conversation:
    - Listen to Part 1: The panel discusses how new thinking and new technologies were the catalyst for a new approach to business intelligence projects.
    - Listen to Part 2: Why taking an "API" approach is important in building an agile data factory.
    - Listen to Part 3: Shadow IT, "sandboxing," and how organizational changes are driving the evolution in information management architecture.
    Additional Resources: The Reference Architecture that is the focus of this conversation is described in detail in these blog posts by Mark Rittman:
    - Introducing the Updated Oracle / Rittman Mead Information Management Reference Architecture
    - Part 1: Information Architecture and the Data Factory
    - Part 2: Delivering the Data Factory
    Be a Guest Producer for an ArchBeat Podcast: Want to be a guest producer for an OTN ArchBeat podcast? Click here to learn how to make it happen.

    Read the article

  • Oracle Systems and Solutions at CloudExpo NY 2012

    - by ferhat
    Oracle's Larry Ellison and Mark Hurd just unveiled the industry's broadest cloud strategy on June 6, with services based on industry standards and 100+ enterprise applications live in the Cloud today! It is the broadest strategy to support your journey along the cloud, on any path you choose, at any pace your business requires and needs. This is great assurance for your journey into the clouds and, at the same time, quite a temptation, don't you think? We will be at the Cloud Expo Conference, taking place June 11-14 in New York. Oracle is a Platinum Plus sponsor of the 10th International Cloud Computing Conference & Expo 2012 East. Oracle is also glad to offer complimentary VIP Gold Passes to the conference. We wish everyone a great and productive time with all the fellow cloudsters. We, the systems solutions group at Oracle, have prepared the Oracle Optimized Solution for Enterprise Cloud Infrastructure to help you start your Infrastructure-as-a-Service journey with ease, confidence, speed, and savings. In this solution we are now bringing together the power of Oracle Solaris and SPARC T4 servers. We will be at the Cloud Bootcamp on Wednesday, June 13th, discussing how this combination can maximize return on investment and help organizations manage costs for their existing infrastructures or for new enterprise cloud infrastructure designs. We will also be on the Expo floor at booth #511 throughout the Cloud Expo conference. Join us for the keynote, general session, and technical sessions with Oracle:
    - Keynote Session: A Pragmatic Journey to the Cloud, Tuesday, June 12, 2012
    - General Session: Oracle Cloud - An Enterprise Cloud for Business-Critical Applications, Monday, June 11, 2012
    - Conference Session: Accelerate Enterprise Cloud Deployment and Gain Total Cloud Control, Monday, June 11, 2012
    - Conference Session: The Java EE 7 Platform: Developing for the Cloud, Monday, June 11, 2012
    - Conference Session: Integrating Big Data into Your Data Center: A Big Data Reference Architecture, Monday, June 11, 2012
    - Conference Session: Borderless Applications in the Cloud with Oracle VM and Oracle Virtual Assembly Builder, Tuesday, June 12, 2012
    - Conference Session: Building a Private, Public, or Hybrid Cloud? Simplify Your Cloud with Oracle's Complete Cloud Solution, Tuesday, June 12, 2012
    - Cloud Boot Camp: Building Private IaaS with Oracle Solaris and SPARC, Wednesday, June 13, 2012

    Read the article

  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep JSPs and servlets, and now we are in doubt about how to include pages in one another and how to set up the jsp:useBean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the bean. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (because reality is more complex):
    head
      header
      top navigation
      actionspanel
    main
      header
      actionspanel
    foot
      footer
    Moreover, each JSP (also the header and footer) uses dynamic data. For example, the title and actionspanel can change on each page reload, or they have links and labels that depend on the processing by the preceding servlet. I know that JSP include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean for each page, so that each bean also holds data for the header and footer beside its core data, and each subsequent included JSP uses the same bean directive? For example:
    DirectoryJSP <- DirectoryBean
    CompareJSP <- CompareBean
    Or should I use one bean for each JSP, so that each bean only holds the data for one JSP and its own purpose? For example:
    DirectoryJSP <- DirectoryBean
      HeaderJSP <- HeaderBean
      FooterJSP <- FooterBean
    CompareJSP <- CompareBean
      HeaderJSP <- HeaderBean
      FooterJSP <- FooterBean
    In the second case: should the subsequent beans be members of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request?

    Read the article

  • SQL Server Interview Questions

    - by Rodney Vinyard
    User-Defined Functions
    Scalar User-Defined Function: A scalar user-defined function returns one of the scalar data types. Text, ntext, image and timestamp data types are not supported. These are the type of user-defined functions that most developers are used to in other programming languages.
    Inline Table-Value User-Defined Function: An inline table-value user-defined function returns a table data type and is an exceptional alternative to a view, as the user-defined function can pass parameters into a T-SQL SELECT command and in essence provide us with a parameterized, non-updateable view of the underlying tables.
    Multi-statement Table-Value User-Defined Function: A multi-statement table-value user-defined function returns a table and is also an exceptional alternative to a view, as the function can support multiple T-SQL statements to build the final result, where the view is limited to a single SELECT statement. Also, the ability to pass parameters into a T-SQL SELECT command, or a group of them, gives us the capability to in essence create a parameterized, non-updateable view of the data in the underlying tables. Within the CREATE FUNCTION command you must define the table structure that is being returned. After creating this type of user-defined function, I can use it in the FROM clause of a T-SQL command, unlike the behavior found when using a stored procedure, which can also return record sets.
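
    A minimal sketch of the first two flavors (table and column names here are invented for illustration):

        -- Scalar UDF: returns a single value per call
        CREATE FUNCTION dbo.GetFullName (@First NVARCHAR(50), @Last NVARCHAR(50))
        RETURNS NVARCHAR(101)
        AS
        BEGIN
            RETURN @First + N' ' + @Last;
        END;
        GO

        -- Inline table-valued UDF: in essence a parameterized, non-updateable view
        CREATE FUNCTION dbo.OrdersByCustomer (@CustomerID INT)
        RETURNS TABLE
        AS
        RETURN (SELECT OrderID, OrderDate, Amount
                FROM dbo.Orders
                WHERE CustomerID = @CustomerID);
        GO

        -- Usable in a FROM clause, unlike a stored procedure
        SELECT * FROM dbo.OrdersByCustomer(42);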

    Read the article

  • File format for animated scene

    - by stephelton
    I've got a custom OpenGL based rendering engine and I'd like to add support for cinema-type scene animation. The artist that is helping me uses primarily 3DSMax. I'd like a file format for exporting and importing this data. I'm also in need of a file format for skeletal animation data, which may have an impact here. I've been looking at MAXScript to manually export this stuff, which would buy me the most flexibility, but I have virtually no experience with 3DSMax itself, so I get a little lost when it comes to terminology. So I'd like to know what file formats exist for animated scene data, and whether they are appropriate for my use (my fear is that they will be way too broad for my fairly simple needs.) The way I view animated scene data is basically a bunch of references to [animated] models with keyframe-based matrices describing their orientation over time. And probably some special camera stuff to handle perspective. I might also want some event type stuff for adding/removing objects. Is this a sane concept?

    Read the article

  • Infragistics - Now Available! 2 New Packs and Silverlight CTPs

    Infragistics® glows silver this season as we continue to innovate for the Silverlight 3 platform and deliver stockings stuffed with the high performance controls needed to quickly and easily create great user experiences in Silverlight, plus two new ICON packs guaranteed to make your applications shine.
    First Silverlight Pivot Grid Now Available: Perfect for working with multi-dimensional data, the xamWebPivotGrid™ presents decision makers with highly-interactive pivoting views of business intelligence. Our new high-performance Silverlight charting control, the xamWebDataChart™, enables blazing fast updates every few milliseconds to charts with millions of data points. Both of these controls are planned for the 2010 Volume 1 release of NetAdvantage for Silverlight Data Visualization.
    Gift to Silverlight Line of Business Applications: You'll be able to deliver superior user experiences in LOB applications with Silverlight RIA services support, a ZIP compression library, a new control persistence framework, and new Silverlight data grid features like unbound columns and template layouts, plus an Office 2007-style ribbon UI for Silverlight.
    Tis the Season to Add Some Shine: The NetAdvantage ICONS Legal Pack adds a touch of legalese to any application user interface with its rich, legal system-themed graphic icons. The NetAdvantage ICONS Education Pack supplies familiar, academic icons that developers can easily add to software reaching students, educators, schools and universities. Sold in themed packs, the ICON packs already available are: Web & Commerce, Healthcare, Office Basics, Business & Finance and Software & Computing. Buy any two of the seven packs for $299 USD (MSRP), 3 packs for $399 USD (MSRP), or each sold separately for $199 USD (MSRP).
    For more product details, contact Infragistics:
    - +1 (800) 231-8588
    - In Europe (English speakers): +44 (0) 20 8387 1474
    - In France (French speakers): +33 (0) 800 667 307
    - In Germany (German speakers): 0800 368 6381
    - In India: +91 (80) 6785 1111

    Read the article

  • Issue 55 - Skin Object Tokens, Optimized Control Panel, OWS Validation and Security, RAD

    April 2010. Welcome to Issue 55 of DNN Creative Magazine. In this issue we focus on the new Skin Object token method introduced in DotNetNuke 5 for adding tokens into a DotNetNuke skin. A Skin Object token is a web user control which covers skin elements such as the logo, menu, search, login links, date, copyright, languages, links, banners, privacy, terms of use, etc. Following this, we demonstrate how to install and use two advanced DotNetNuke admin control panels which are available for free from Oliver Hine. These control panels provide an optimized version of the admin control panel to improve performance and page load times, as well as a ribbon bar control panel which adds additional features. Next, we continue the Open Web Studio tutorials; this month we demonstrate some very advanced techniques for building a car parts application in Open Web Studio. Throughout the tutorial we cover form input, validation, how to use dependent drop-down lists, populating checkbox lists, and we introduce a new concept of data level security. Data level security allows you to control which data a user can access within a module. To finish, we have part five of the "How to Build a News Application with DotNetMushroom Rapid Application Developer (RAD)" article, where we demonstrate how to implement paging. This issue comes complete with 14 videos:
    - Skinning: Skin Object Tokens for DotNetNuke 5 (8 videos - 64mins)
    - Free Module: Advanced Optimized Control Panel by Oliver Hine (1 video - 11mins)
    - Module Development Series: Form Validation, Dependant Drop Downs and Data Level Security in OWS (5 videos - 44mins)
    - How to Implement Paging with DotNetMushroom RAD
    View issue 55 to download all of the videos in one zip file. DNN Creative Magazine for DotNetNuke Web Designers: covering DotNetNuke module video reviews, video tutorials, mp3 interviews, resources and web design tips for working with DotNetNuke. In 55 issues we have created 563 videos!

    Read the article

  • Drawing lines in 3D space

    - by DeadMG
    When attempting to draw a line in 3D space with D3DPT_LINELIST, Direct3D gives me an error about an invalid vertex declaration, saying that it cannot be converted to an FVF. I am using the same vertex declaration and shader/stream setup as for my D3DPT_TRIANGLELIST rendering, which works absolutely correctly. How can I use D3DPT_LINELIST to render some lines in 3D space? Edit: Oopsie, forgot my codeses. Here's my raw Draw call:

        D3DCALL(device->SetStreamSource(1, PerBoneBuffer.get(), 0, sizeof(PerInstanceData)));
        D3DCALL(device->SetStreamSourceFreq(1, D3DSTREAMSOURCE_INSTANCEDATA | 1));
        D3DCALL(device->SetStreamSource(0, LineVerts, 0, sizeof(D3DXVECTOR3)));
        D3DCALL(device->SetStreamSourceFreq(0, D3DSTREAMSOURCE_INDEXEDDATA | lines.size()));
        D3DCALL(device->SetIndices(LineIndices));
        PerInstanceData* data;
        std::vector<Wide::Render::Line*> lines_vec(lines.begin(), lines.end());
        D3DCALL(PerBoneBuffer->Lock(0, lines.size() * sizeof(PerInstanceData),
            reinterpret_cast<void**>(&data), D3DLOCK_DISCARD));
        std::for_each(lines.begin(), lines.end(), [&](Wide::Render::Line* ptr) {
            data->Color = D3DXColor(ptr->Colour);
            D3DXMATRIXA16 Translate, Scale, Rotate;
            D3DXMatrixTranslation(&Translate, ptr->Start.x, ptr->Start.y, ptr->Start.z);
            D3DXMatrixScaling(&Scale, ptr->Scale, 1, 1);
            D3DXMatrixRotationQuaternion(&Rotate, &D3DQuaternion(ptr->Rotation));
            data->World = Scale * Rotate * Translate;
        });
        D3DCALL(PerBoneBuffer->Unlock());
        D3DCALL(device->DrawIndexedPrimitive(D3DPRIMITIVETYPE::D3DPT_LINELIST, 0, 0, 2, 0, 1));

    Here's my vertex declaration:

        D3DVERTEXELEMENT9 BasicMeshVertices[] = {
            {0, 0,  D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},
            {1, 0,  D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},
            {1, 16, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1},
            {1, 32, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 2},
            {1, 48, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 3},
            {1, 64, D3DDECLTYPE_FLOAT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR, 0},
            D3DDECL_END()
        };

    The LineIndices are just 0, 1 and the LineVerts are just {0,0,0} and {1,0,0}, to represent a unit 3D line along the X axis.

    Read the article

  • Dynamic/Adaptive RLE

    - by Lucius
    So, I'm developing a 2D, tile based game and a map maker thingy - all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow production down too much (and I'm on a somewhat tight schedule). Well, worst case scenario I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this:

        private HashMap<Integer, Integer>[][] tiles;
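
    The grouping at the heart of RLE - collapsing consecutive equal values into (value, run length) pairs - can be illustrated as a classic gaps-and-islands query. This is only a sketch of the encoding idea itself, written in T-SQL like the other examples on this page rather than in the game's Java, and dbo.MapRow(col, tile_id) is a hypothetical table holding one row of tiles:

        -- Rows whose col minus their per-tile_id row number match belong to one run
        WITH runs AS (
            SELECT col, tile_id,
                   col - ROW_NUMBER() OVER (PARTITION BY tile_id ORDER BY col) AS grp
            FROM dbo.MapRow
        )
        SELECT tile_id, MIN(col) AS start_col, COUNT(*) AS run_length
        FROM runs
        GROUP BY tile_id, grp
        ORDER BY start_col;

    The practical upshot for the editing case: a single tile change only disturbs the run(s) touching that column, so a row can be re-encoded locally instead of rescanning the whole map.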

    Read the article

  • SQLAuthority News – Microsoft Whitepaper – AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas

    - by pinaldave
    SQL Server 2012 has many interesting features, but the most talked about feature is AlwaysOn. Performance tuning is always a hot topic; I see a lot of demand for it and a lot of business around it. However, many times when people talk about performance tuning they think of it as either query tuning, performance tuning, or server tuning. All are valid points, but a performance tuning expert usually understands the business workload and business logic before making suggestions. For example, if a performance tuning expert analyzes the workload and realizes that there are plenty of reports as well as read-only queries on the server, they can certainly consider alternate options. If read-only data is not required in real time, or slightly delayed data is acceptable, it makes sense to divide the workload. A secondary replica of the original data, which can serve all the read-only queries and reports, is a good idea in most of the cases where there is plenty of workload that does not depend on real-time data. SQL Server 2012 introduced the AlwaysOn feature, which fits this scenario very well and provides a solution for read-only workloads. Microsoft has recently announced a white paper on exactly this subject. I recommend it to every SQL enthusiast who is going to implement a solution to offload read-only workloads to secondary replicas. Download white paper AlwaysOn Solution Guide: Offloading Read-Only Workloads to Secondary Replicas. Reference: Pinal Dave (http://blog.sqlauthority.com)
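
    A minimal sketch of the offloading setup itself, with hypothetical availability group and server names (NODE1 as the primary, NODE2 as the readable secondary); see the white paper for the full treatment:

        -- Allow read-only connections on the secondary and give it a routing URL
        ALTER AVAILABILITY GROUP [AG_Sales] MODIFY REPLICA ON N'NODE2'
        WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY,
                              READ_ONLY_ROUTING_URL = N'TCP://node2.contoso.com:1433'));

        -- When NODE1 is primary, route read-intent sessions to NODE2
        ALTER AVAILABILITY GROUP [AG_Sales] MODIFY REPLICA ON N'NODE1'
        WITH (PRIMARY_ROLE (READ_ONLY_ROUTING_LIST = (N'NODE2')));

    Clients opt in by adding ApplicationIntent=ReadOnly to their connection string, so reports and read-only queries land on the secondary while writes continue to hit the primary.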

    Read the article
