Search Results

Search found 355 results on 15 pages for 'analyse'.

Page 11/15

  • Creating a specialised view filtering form in Rails

    - by Schroedinger
    G'day guys, I have a set of data, and at the moment I generate multiple analyses of this data (each analysis going into its own ActiveRecord item called a pricing_interval) using a helper function. Currently, to analyse the set of data you need a start time (using datetime_select), an integer (using text_field) and a name (using text_field). On submission of the form I would like to be redirected to the index page of my pricing_interval, as the values will be re-generated. Manually generating a range proves that my helper methods work. How would I build a form that, on submit, would send parameters to a function in the form of (date, integer, name) so that it could immediately begin work whilst redirecting the user to server/pricing_intervals? Anything at all would help; I've spent hours over the past few days trying to get the Rails form syntax working properly, to no avail. A really straightforward guide to what I would implement to get this working would be hugely appreciated. I've looked through the form guides; as I'm not creating an object, but merely parsing params, there's got to be an easy way to do this, right?

    Read the article

  • Free static checker for C99 code

    - by detly
    I am looking for a free static checker for C99 code (including GCC extensions) with the ability to explicitly say "these preprocessor macros are always defined." I need that last part because I am compiling embedded code for a single target processor. The compiler (Microchip's C32, GCC based) sets a macro based on the selected processor, which is then used in the PIC32 header files to select a processor-specific header file to include. cppcheck therefore fails: it detects the 30 different #ifdefs used to select one of the many possible PIC32 processors, tries to analyse all possible combinations of these plus all other #defines, and gives up. For example, if splint could process C99 code, I would use:

        splint -D__PIC32_FEATURE_SET__=460 -D__32MX460F512L__ \
               -D__LANGUAGE_C__ -I/path/to/my/includes source.c
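
    A hedged aside, not from the original question: newer cppcheck releases treat -D as "check only this configuration" rather than one configuration among many, so an invocation along the lines of the sketch below may already give the behaviour the splint example above is after. The exact flag semantics should be verified against the installed cppcheck version.

        cppcheck --enable=all -D__PIC32_FEATURE_SET__=460 -D__32MX460F512L__ \
                 -D__LANGUAGE_C__ -I/path/to/my/includes source.c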

    Read the article

  • Algorithm on trajectory analysis.

    - by Arman
    Hello, I would like to analyse trajectory data against given templates. I need to stack similar trajectories together. The data is a set of coordinates (xy, xy, xy) and the templates are again lines defined by a set of control points. I don't know which direction to go in, maybe neural networks or pattern recognition? Could you please advise a page, book or library to start with? Kind regards, Arman. PS. Is this the right place to ask the question?
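
    As a starting point, one common and very simple approach is to resample every trajectory and every template to the same number of points and compare them by average point-to-point distance; more robust measures such as dynamic time warping, Hausdorff or Frechet distance follow the same pattern. The Python sketch below is only an illustration of that idea (the function names and the 50-point resolution are arbitrary choices, not from the question).

        import math

        def resample(points, n=50):
            """Resample a polyline [(x, y), ...] to n evenly spaced points."""
            if len(points) < 2:
                return [points[0]] * n
            dists = [0.0]                              # cumulative arc length
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
            total = dists[-1] or 1.0
            out, j = [], 0
            for i in range(n):
                target = total * i / (n - 1)
                while j < len(dists) - 2 and dists[j + 1] < target:
                    j += 1
                span = dists[j + 1] - dists[j] or 1.0
                t = (target - dists[j]) / span
                (x0, y0), (x1, y1) = points[j], points[j + 1]
                out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            return out

        def similarity(traj, template, n=50):
            """Mean point-to-point distance between two resampled polylines."""
            a, b = resample(traj, n), resample(template, n)
            return sum(math.hypot(ax - bx, ay - by)
                       for (ax, ay), (bx, by) in zip(a, b)) / n

        def nearest_template(traj, templates):
            """Index of the template the trajectory most resembles."""
            return min(range(len(templates)),
                       key=lambda i: similarity(traj, templates[i]))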

    Read the article

  • R and SPSS difference

    - by sfactor
    I will be analysing a vast amount of network-traffic-related data shortly, and I will pre-process the data in order to analyse it. I have found that R and SPSS are among the most popular tools for statistical analysis. I will also be generating quite a lot of graphs and charts. So I was wondering what the basic difference between these two packages is. I am not asking which one is better; I just want to know what the differences in terms of workflow are, besides the fact that SPSS has a GUI. I will mostly be working with scripts in either case anyway, so I wanted to know about the other differences.

    Read the article

  • SQL Server - Schema/Code Analysis Rules - What would your rules include?

    - by Randy Minder
    We're using Visual Studio Database Edition (DBPro) to manage our schema. This is a great tool that, among the many things it can do, can analyse our schema and T-SQL code based on rules (much like what FxCop does with C# code), and flag certain things as warnings and errors. Some example rules might be that every table must have a primary key, no underscores in column names, every stored procedure must have comments, etc. The number of rules built into DBPro is fairly small, and a bit odd. Fortunately DBPro has an API that allows the developer to create their own. I'm curious as to the types of rules you and your DB team would create (both schema rules and T-SQL rules). Looking at some of your rules might help us decide what we should consider. Thanks - Randy

    Read the article

  • How to crawl retweets of a certain user?

    - by Xiong
    I studied the Twitter API documentation today. I only found that we could use the "Twitter REST API Method: statuses user_timeline" to acquire the statuses of a certain user, and retweets are stripped out of the user_timeline for backwards-compatibility reasons. If I want retweets included, the API documentation recommends "statuses retweeted_by_me", but retweeted_by_me cannot return the retweets made by other users. I think maybe we could analyse the Twitter web page of a certain user to get his retweets, but is there any more elegant way to crawl the retweets of a certain user? Thanks in advance!
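
    For what it's worth, a hedged sketch of one possible approach: pull the user's timeline over the REST API with retweets included and keep only the entries that carry a retweeted_status object. The endpoint, the include_rts parameter and the bearer-token authentication shown below are assumptions based on the v1.1 API of the time, not something stated in the question, and would need to be checked against whatever the current API requires.

        import requests

        API_URL = "https://api.twitter.com/1.1/statuses/user_timeline.json"
        BEARER_TOKEN = "YOUR-TOKEN-HERE"   # placeholder; auth scheme is an assumption

        def fetch_retweets(screen_name, pages=5, count=200):
            """Collect the native retweets visible in a user's recent timeline."""
            headers = {"Authorization": "Bearer " + BEARER_TOKEN}
            retweets, max_id = [], None
            for _ in range(pages):
                params = {"screen_name": screen_name, "count": count,
                          "include_rts": "true"}
                if max_id is not None:
                    params["max_id"] = max_id
                resp = requests.get(API_URL, headers=headers, params=params)
                resp.raise_for_status()
                batch = resp.json()
                if not batch:
                    break
                retweets += [t for t in batch if "retweeted_status" in t]
                max_id = batch[-1]["id"] - 1   # page backwards through the timeline
            return retweets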

    Read the article

  • AS3: How to access pixel data efficiently?

    - by JonoRR
    I'm working on a game. The game requires entities to analyse an image and head towards pixels with specific properties (high red channel, etc.). I've looked into Pixel Bender, but this only seems useful for writing new colors to the image. At the moment, even at a low resolution (200x200), just one entity scanning the image slows things to 1-2 frames per second. I'm embedding the image and instancing it as a Bitmap as a child of the stage. The 1-2 FPS situation arises when using BitmapData.getPixel() (on each pixel) with a distance calculation beforehand. I'm wondering if there's any way I can do this more efficiently... My first thought was some sort of spatial partitioning coupled with splitting the image up into many smaller pieces. I also feel like Pixel Bender should be able to help somehow, but I've had little experience with it. Cheers for any help. Jonathan
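
    The AS3-specific fix would presumably involve reading the pixel data out in bulk instead of calling getPixel once per pixel; as a language-neutral illustration of that "read the buffer once, then vectorise the per-pixel maths" idea, here is a small Python/numpy sketch. The scoring formula, the distance weight and the function name are invented for the example and are not from the question.

        import numpy as np

        def best_target(rgb, entity_xy, distance_weight=0.5):
            """rgb: HxWx3 uint8 array; entity_xy: (x, y) of the entity.
            Return the (x, y) pixel with the best red-vs-distance score."""
            h, w, _ = rgb.shape
            red = rgb[:, :, 0].astype(np.float32)        # red channel, 0..255
            ys, xs = np.mgrid[0:h, 0:w]                  # coordinate grids
            ex, ey = entity_xy
            dist = np.hypot(xs - ex, ys - ey)            # distance to every pixel
            score = red - distance_weight * dist         # one vectorised pass
            iy, ix = np.unravel_index(np.argmax(score), score.shape)
            return int(ix), int(iy)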

    Read the article

  • Phusion Passenger on Ubuntu not seeing plugin in vendors directory

    - by armyofgnomes
    Phusion Passenger, running on Ubuntu Hardy Heron, is bombing on a require 'lingua/en/readability'. The plugin is installed in the plugins directory and works fine with script/server, just not Passenger. Error Message: source file that the application requires, is missing. * It is possible that you didn't upload your application files correctly. Please check whether all your application files are uploaded. * A required library may not installed. Please install all libraries that this application requires. Further information about the error may have been written to the application's log file. Please check it in order to analyse the problem. Error message: no such file to load -- lingua/en/readability (MissingSourceFile)

    Read the article

  • Is there an equivalent to Java's ClassFileTransformer in .NET? (a way to replace a class)

    - by Alix
    I've been searching for this for quite a while with no luck so far. Is there an equivalent to Java's ClassFileTransformer in .NET? Basically, I want to create a class CustomClassFileTransformer (which in Java would implement the interface ClassFileTransformer) that gets called whenever a class is loaded, and is allowed to tweak it and replace it with the tweaked version. I know there are frameworks that do similar things, but I was looking for something more straightforward, like implementing my own ClassFileTransformer. Is it possible?

    EDIT #1. More details about why I need this: basically, I have a C# application and I need to monitor the instructions it wants to run in order to detect read or write operations to fields (the Ldfld and Stfld operations) and insert some instructions before the read/write takes place. I know how to do this (except for the part where I need to be invoked to replace the class): for every method whose code I want to monitor, I must:

    1. Get the method's MethodBody using MethodBase.GetMethodBody().
    2. Transform it to a byte array with MethodBody.GetILAsByteArray(). The byte[] it returns contains the bytecode.
    3. Analyse the bytecode as explained here, possibly inserting new instructions or deleting/modifying existing ones by changing the contents of the array.
    4. Create a new method and use the new bytecode to create its body, with MethodBuilder.CreateMethodBody(byte[] il, int count), where il is the array with the bytecode.

    I put all these tweaked methods in a new class and use the new class to replace the one that was originally going to be loaded. An alternative to replacing classes would be somehow getting notified whenever a method is invoked. Then I'd replace the call to that method with a call to my own tweaked method, which I would tweak only the first time it is invoked and then put in a dictionary for future uses, to reduce overhead (for future calls I'll just look up the method and invoke it; I won't need to analyse the bytecode again). I'm currently investigating ways to do this and LinFu looks pretty interesting, but if there was something like a ClassFileTransformer it would be much simpler: I just rewrite the class, replace it, and let the code run without monitoring anything. An additional note: the classes may be sealed. I want to be able to replace any kind of class; I cannot impose restrictions on their attributes.

    EDIT #2. Why I need to do this at runtime: I need to monitor everything that is going on so that I can detect every access to data. This applies to the code of library classes as well. However, I cannot know in advance which classes are going to be used, and even if I knew every possible class that might get loaded it would be a huge performance hit to tweak all of them instead of waiting to see whether they actually get invoked or not.

    POSSIBLE (BUT PRETTY HARDCORE) SOLUTION. In case anyone is interested (and I see the question has been faved, so I guess someone is), this is what I'm looking at right now. Basically I'd have to implement the profiling API and register for the events that I'm interested in, in my case whenever a JIT compilation starts. An extract of the blog post:

    - In your ICorProfilerCallback2::ModuleLoadFinished callback, you call ICorProfilerInfo2::GetModuleMetadata to get a pointer to a metadata interface on that module.
    - QI for the metadata interface you want. Search MSDN for "IMetaDataImport", and grope through the table of contents to find topics on the metadata interfaces.
    - Once you're in metadata-land, you have access to all the types in the module, including their fields and function prototypes. You may need to parse metadata signatures, and this signature parser may be of use to you.
    - In your ICorProfilerCallback2::JITCompilationStarted callback, you may use ICorProfilerInfo2::GetILFunctionBody to inspect the original IL, and ICorProfilerInfo2::GetILFunctionBodyAllocator and then ICorProfilerInfo2::SetILFunctionBody to replace that IL with your own.

    The great news: I get notified when a JIT compilation starts and I can replace the bytecode right there, without having to worry about replacing the class, etc. The not-so-great news: you cannot invoke managed code from the API's callback methods, which makes sense but means I'm on my own parsing the IL code, as opposed to being able to use Cecil, which would've been a breeze. I don't think there's a simpler way to do this without using AOP frameworks (such as PostSharp). If anyone has any other idea please let me know. I'm not marking the question as answered yet.

    Read the article

  • How can I debug an application crash in Win7 after it's happened?

    - by parsley72
    I have a Visual Basic 6 application that I've recently changed to use a couple of C++ DLLs I've written in Visual Studio 2008. The application works fine on my PC, but when we install it on one of our test PCs it tends to crash during shutdown - we see the Win 7 message "Your application has failed" or whatever it is. I know Win 7 stores data that can be used to analyse the crash. I've got the source code and .PDB files from the build so I should be able to use that, but I can't figure out where Win 7 stores the data from the crash. The Event Viewer shows the crash but doesn't have any data and the directory C:\Windows\Minidump doesn't exist. Where do the crash files get put?

    Read the article

  • Reading a Delphi binary file in Python

    - by Brendan
    I have a file that was written with the following Delphi declaration:

        Type
          Tfulldata = Record
            dpoints, dloops : integer;
            dtime, bT, sT, hI, LI : real;
            tm : real;
            data : array[1..armax] Of Real;
          End;
        ...
        Var
          fh : File Of Tfulldata;

    I want to analyse the data in the files (many MB in size) using Python if possible - is there an easy way to read in the data and cast the data into Python objects similar in form to the Delphi records? Does anyone know of a library perhaps that does this?
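
    A hedged sketch of one way to do it with nothing but the standard library: treat each record as a fixed-size binary chunk and unpack it with struct. The assumptions (which must be checked against the Delphi version that wrote the file) are that Integer is 4 bytes, Real is an 8-byte double rather than the old 6-byte Real type, the record is written without padding, and that the ARMAX constant below matches the armax constant in the Delphi unit.

        import struct
        from collections import namedtuple

        ARMAX = 1000  # hypothetical; must equal Delphi's armax

        # little-endian, no padding: 2 x 4-byte integer, 6 x 8-byte real, then the array
        RECORD_FMT = "<2i6d%dd" % ARMAX
        RECORD_SIZE = struct.calcsize(RECORD_FMT)

        FullData = namedtuple("FullData",
                              "dpoints dloops dtime bT sT hI LI tm data")

        def read_fulldata(path):
            """Read every Tfulldata record from a 'file of Tfulldata'."""
            records = []
            with open(path, "rb") as fh:
                while True:
                    chunk = fh.read(RECORD_SIZE)
                    if len(chunk) < RECORD_SIZE:
                        break                      # end of file (or trailing junk)
                    fields = struct.unpack(RECORD_FMT, chunk)
                    records.append(FullData(*fields[:8], data=fields[8:]))
            return records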

    Read the article

  • Defined outlets, connected them, everything looks fine, nothing works.

    - by Tom
    Hi! I'm trying to play with a WebView. I made an outlet:

        IBOutlet UIWebView *browser;

    Defined it as a property:

        @property (nonatomic, retain) IBOutlet UIWebView *browser;

    Synthesized it:

        @synthesize browser;

    Finally, I connected it in Interface Builder - really, it is connected. Then I try to do something with it, e.g.:

        [browser loadRequest:[NSURLRequest requestWithURL:[NSURL URLWithString:@"http://apple.com"]]];

    Or also:

        Etape *etape = [[Etape alloc] init];
        NSString *html = [etape generateHTMLforEtape:[current_etape objectAtIndex:0]];
        [browser loadHTMLString:html baseURL:nil];
        [etape release];

    I get no errors; I tried Build & Analyse and got no notices, warnings or errors. I've been searching for a whole day, please help me :/ Thanks a lot! EDIT: Here are screenshots of my connections for my WebView:

    Read the article

  • Memory leak when declaring NSString from ABRecordCopyValue

    - by Ben Thompson
    I am using the following line of code... NSString *clientFirstName = (NSString *)ABRecordCopyValue(person, kABPersonFirstNameProperty); The 'Analyse' feature in Xcode is saying that this gives rise to a potential memory leak. I am not releasing clientFirstName at all, as I have neither alloc'd nor retained it. However, I am conscious that ABRecordCopyValue may not be returning an object in the way a call like [NSMutableArray arrayWithArray:someArray] would, which might mean I am indeed creating a new object that I control and must release. Keen to hear thoughts...

    Read the article

  • Which can handle a huge surge of queries: SQL Server 2008 Fulltext or Lucene

    - by Luke101
    I am creating a widget that will be installed on several websites and blogs. The widget will analyse the remote web page's title and content, then return relevant articles/links on my website. The amount of traffic we expect will be very large, roughly 500K queries a day and up from there. I need the queries to be returned very quickly, so I need the candidate to be high performance, similar to Google AdSense. The remote title can be from 5 to 50 words and the description we will use is no more than 3000 words. Which of these two do you think can handle the load?

    Read the article

  • Setup sonar-runner for multiple java projects

    - by zetafish
    I am trying to run sonar-runner to analyze multiple Java projects in one go. According to the documentation http://docs.codehaus.org/display/SONAR/Analyse+with+a+simple+Java+Runner it is just a matter of creating a sonar-project.properties file for each project. But it is not clear to me where exactly I have to put these sonar-project.properties files. I tried to add multiple .properties files in the $SONAR_RUNNER_HOME/conf folder but the runner does not seem to pick them up. It only sees the sonar-project.properties file. Any suggestions on how to run sonar-runner for multiple projects?
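
    A hedged sketch of one layout that the runner's multi-module support is meant to cover: a single sonar-project.properties at the root of the workspace that declares each Java project as a module, run with sonar-runner from that root. The project keys, paths and exact property names below are illustrative and should be checked against the documentation for the SonarQube/sonar-runner version in use.

        # sonar-project.properties at the root of the workspace
        # (property names are from memory of sonar-runner 2.x and should be
        # verified against the docs for the version in use)
        sonar.projectKey=org.example:all-projects      # hypothetical key
        sonar.projectName=All Projects
        sonar.projectVersion=1.0
        sonar.language=java

        # one entry per Java project to analyse in the same run
        sonar.modules=projectA,projectB

        projectA.sonar.projectBaseDir=projectA
        projectA.sonar.sources=src/main/java

        projectB.sonar.projectBaseDir=projectB
        projectB.sonar.sources=src/main/java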

    Read the article

  • How does a compiler use a lib file?

    - by Xinus
    I am curious about how a C/C++ compiler analyses lib files. I mean, say I create a library containing some classes and then use that library in my main program. How does the compiler know what class names are in that library? Of course that information is present in binary form. I want to use that functionality in my program; to be specific, I have a binary lib file and I want to know all the classes and properties/functions present in that lib file. Is it possible? If the compiler can do that, why can't some library? Thanks for any clue.

    Read the article

  • Write the longest possible loop.

    - by Abhay
    Hello Group, Recently I was asked this question in a technical discussion: what is the longest possible loop that can be written in a programming language? The loop has to be as long as possible, yet not infinite, and should not end up crashing the program (recursion etc.). I honestly did not know how to attack this problem, so I asked him if it is practically possible. He said that using some computer science concepts you can arrive at a hypothetical number which may not be practical, but which will nevertheless not be infinite. Does anyone here know how to analyse/attack this problem? P.S. Choosing the highest limit of a type that can store the largest numerical value is apparently not an answer. Thanks in advance.
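
    One reading of the "computer science concepts" hint (an assumption on my part, not something stated in the question) is to bound the loop with a function that is provably total yet grows absurdly fast, such as the Ackermann function: the loop is then finite by construction but far longer than any hardware could ever execute. A small Python sketch of the idea:

        def ackermann(m, n):
            """Total (always terminating) but extremely fast-growing function."""
            if m == 0:
                return n + 1
            if n == 0:
                return ackermann(m - 1, 1)
            return ackermann(m - 1, ackermann(m, n - 1))

        def very_long_but_finite_loop(m=3, n=3):
            # ackermann(3, 3) is 61; ackermann(4, 2) already has 19,729 decimal
            # digits, so the loop bound is finite in theory yet unreachable in
            # practice. Keep m and n tiny if you actually run this.
            bound = ackermann(m, n)
            count = 0
            for _ in range(bound):     # finite by construction, never infinite
                count += 1
            return count

        print(very_long_but_finite_loop())   # prints 61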

    Read the article

  • event vs thread programming on server side.

    - by AlxPeter
    We are planning to start a fairly complex web portal which is expected to attract good local traffic, and I've been told by my boss to consider/analyse node.js for the server side. I think scalability and multi-core support can be handled with an Nginx or Cherokee up front. 1) Is node.js ready for some serious/big business? 2) Does this 'event/asynchronous' paradigm on the server side have the potential to support heavy traffic and data operations, considering the fact that 'everything' is processed in a single thread and all the live connections would be lost if it crashed (though it's easy to restart)? 3) What are the advantages of event-based programming compared to the thread-based style, or vice versa? (I know of the higher cost associated with thread switching, but hardware can be squeezed with the event model.) The following are interesting but somewhat contradictory papers: 1) http://www.usenix.org/events/hotos03/tech/full_papers/vonbehren/vonbehren_html 2) http://pdos.csail.mit.edu/~rtm/papers/dabek:event.pdf
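
    Purely as an illustration of the contrast in question 3 (in Python rather than node.js, and not taken from the post): the first server below spends one OS thread per connection, while the second multiplexes every connection on a single thread inside an event loop, which is also why a blocking or crashing handler takes all live connections down with it.

        import asyncio
        import socket
        import threading

        def threaded_echo_server(port=9001):
            """One OS thread per connection: each client costs a stack and context switches."""
            srv = socket.socket()
            srv.bind(("127.0.0.1", port))
            srv.listen(100)

            def handle(conn):
                with conn:
                    while data := conn.recv(1024):
                        conn.sendall(data)            # echo

            while True:
                conn, _ = srv.accept()
                threading.Thread(target=handle, args=(conn,), daemon=True).start()

        async def event_loop_echo_server(port=9002):
            """One thread, one event loop: every live connection shares its fate."""
            async def handle(reader, writer):
                while data := await reader.read(1024):
                    writer.write(data)                # echo
                    await writer.drain()
                writer.close()

            server = await asyncio.start_server(handle, "127.0.0.1", port)
            async with server:
                await server.serve_forever()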

    Read the article

  • Enumerating combinations in a distributed manner

    - by Reyzooti
    I have a problem where I must analyse 500C5 combinations (255,244,687,600) of something. Distributing it over a 10-node cluster where each node processes roughly 10^6 combinations per second means the job will be complete in about 7 hours. The problem I have is distributing the 255,244,687,600 combinations over the 10 nodes. I'd like to present each node with 25,524,468,760, however the algorithms I'm using can only produce the combinations sequentially. I'd like to be able to pass the set of elements and a range of combination indices, e.g. [0, 10^7) or [10^7, 2*10^7), and have the nodes themselves figure out the combinations. The algorithms I'm using at the moment are from the following: http://home.roadrunner.com/~hinnant/combinations.html I've considered using a master node that enumerates each of the combinations and sends work to each of the nodes, however the overhead incurred in iterating the combinations from a single node and communicating the work back and forth is enormous, and will subsequently lead to the master node becoming the bottleneck. Are there any good combination-iterating algorithms geared up for efficient/optimal distributed enumeration?
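
    The usual trick for this (a sketch, not from the question) is the combinatorial number system: any index in [0, C(n, k)) can be "unranked" directly into the combination with that lexicographic rank, so each node can jump to the start of its own index range and then step through it sequentially, with no master enumeration at all. A Python illustration:

        from math import comb

        def unrank_combination(rank, n, k):
            """Combination of k items out of range(n) with the given
            lexicographic rank (0 <= rank < comb(n, k))."""
            combo, x = [], 0
            for remaining in range(k, 0, -1):
                # skip whole blocks of combinations that start with smaller x
                while comb(n - x - 1, remaining - 1) <= rank:
                    rank -= comb(n - x - 1, remaining - 1)
                    x += 1
                combo.append(x)
                x += 1
            return combo

        def next_combination(combo, n):
            """Lexicographic successor of a k-combination of range(n)."""
            combo, k = combo[:], len(combo)
            for i in range(k - 1, -1, -1):
                if combo[i] < n - (k - i):
                    combo[i] += 1
                    for j in range(i + 1, k):
                        combo[j] = combo[j - 1] + 1
                    return combo
            raise ValueError("last combination reached")

        def node_work(node_id, nodes, n=500, k=5):
            """Yield this node's share of all comb(n, k) combinations."""
            total = comb(n, k)
            per_node = (total + nodes - 1) // nodes
            start = node_id * per_node
            stop = min(start + per_node, total)
            combo = unrank_combination(start, n, k)
            for i in range(start, stop):
                yield combo
                if i + 1 < stop:
                    combo = next_combination(combo, n)

        # e.g. node 3 of 10 starts at combination index 3 * ceil(C(500,5) / 10)
        # and only ever needs (node_id, nodes) from the master.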

    Read the article

  • antlr: How to rewrite only specific

    - by user1293945
    I am sure antlr can solve my problem, but I can't figure out how to implement it, even at a high level; I rapidly got caught up in syntax problems of antlr itself. My grammar is quite simple and made of the following tokens and rules (there's no real need to go into their details here). The evaluator resolves to expressions, which finally resolve to IDENT:

        evaluator : expression EOF! ;
        ...
        ...
        term : PARTICIPANT_TYPE(IDENT | '('! expression ')'! | max | min | if_ | NUMBER)+ ;

    Now, I would like to analyse and rewrite the 'term' rule so that IDENT tokens (and only them) get rewritten with the PARTICIPANT_TYPE. All the others should simply remain the same.

    Read the article

  • DBCC MEMUSAGE in 2005/8 ?

    - by steveh99999
    I used to like using the undocumented command DBCC MEMUSAGE in SQL 2000 to see which tables were using space in the SQL data cache. In SQL 2005, this command is no longer present. Instead a DMV – sys.dm_os_buffer_descriptors – can be used to display data cache contents, but this doesn't quite give you the same output as DBCC MEMUSAGE. I'm also aware that you can use Quest's Spotlight tool to view a summary of data cache contents. Using this post by Umachandar Jayachandran of Microsoft, I was able to create the following equivalent for SQL 2005/8. I've wrapped Umachandar's original query in a CTE to produce summary information:

        ;WITH memusage_CTE AS
        (
          SELECT bd.database_id, bd.file_id, bd.page_id, bd.page_type,
                 COALESCE(p1.object_id, p2.object_id) AS object_id,
                 COALESCE(p1.index_id, p2.index_id) AS index_id,
                 bd.row_count, bd.free_space_in_bytes,
                 CONVERT(TINYINT, bd.is_modified) AS 'DirtyPage'
          FROM sys.dm_os_buffer_descriptors AS bd
          JOIN sys.allocation_units AS au ON au.allocation_unit_id = bd.allocation_unit_id
          OUTER APPLY (
            SELECT TOP(1) p.object_id, p.index_id
            FROM sys.partitions AS p
            WHERE p.hobt_id = au.container_id AND au.type IN (1, 3)
          ) AS p1
          OUTER APPLY (
            SELECT TOP(1) p.object_id, p.index_id
            FROM sys.partitions AS p
            WHERE p.partition_id = au.container_id AND au.type = 2
          ) AS p2
          WHERE bd.database_id = DB_ID()
            AND bd.page_type IN ('DATA_PAGE', 'INDEX_PAGE')
        )
        SELECT TOP 20 DB_NAME(database_id) AS 'Database',
               OBJECT_NAME(object_id, database_id) AS 'Table Name',
               index_id,
               COUNT(*) AS 'Pages in Cache',
               SUM(dirtyPage) AS 'Dirty Pages'
        FROM memusage_CTE
        GROUP BY database_id, object_id, index_id
        ORDER BY COUNT(*) DESC

    I'm not 100% happy with the results of the above query, however... I've noticed that on a busy BizTalk MessageBox database it will return information on pages that contain GHOST rows, i.e. where data has already been deleted but has yet to be cleaned up by a background process. I need to investigate further why the cache on this server apparently contains so much GHOST data... For more information on the background ghost cleanup process, see this article by Paul Randall. However, I think the results of this query should still be of interest to a DBA. I have another post to come shortly regarding an example I encountered where this information proved useful to me... I notice that in SQL 2008, sys.dm_os_buffer_descriptors gained an extra column – numa_node – and I'm interested to see how this is populated and how useful this column can be on a NUMA-enabled system. I'm assuming that in theory you could use this column to help analyse how your tables are spread across a NUMA-enabled data cache?

    Read the article

  • Data Structures usage and motivational aspects

    - by Aubergine
    Throughout my long student life I always wondered why there are so many of them, yet there seems to be so little use of most of them. That opinion didn't really change when I got a job. We have brilliant books on what they are and their complexities, but I never encounter resources which actually give a good hint of practical usage. I perfectly understand that I have to look at a problem, analyse the required operations, and look for a data structure that does them efficiently. However, in practice I never do that, not because of a human laziness syndrome, but because when it comes to work I acknowledge time priority over self-development. Over time I thought that when I became a better developer I would automatically use more of them; that didn't happen at all, or maybe I just didn't. Then I found that my colleagues are usually in the same boat as me: knowing more or less some three data structures and being totally happy about it, refusing to discuss this matter further with me and coming back to conversations about 'cool new languages', 'libraries that do the job for you' and the joy of working under scrumban, etc. I am stuck with ArrayLists, Arrays and SortedMap, which no matter what I do always suffice, or else I tweak them to be capable of fulfilling my task. Yes, it might be inefficient, but do we really have to care if Intel increases performance over the years no matter whether we improve our skills? Do new Xeon or IBM machines really care what we use? What if I like to build things, but I am not particularly excited about whether it is n log(n) or just n? Over twenty years processing power has increased enormously; does that give us the freedom not to be critical about which one to use? On top of that, new, more optimized languages appear which support multiple cores more efficiently. To be more specific: I would like to find motivational material on complex real areas/cases of possible effective usage of data structures. I would be really grateful if you would provide relevant resources. There is a similar question, but in the end the links again mostly describe the structures or give dumb examples (vehicles, students or holy grail quests; yes, very relevant), and people keep referring to "the scenario decides the data structure to use". I want to know these complex scenarios so that I am able to identify similarities to my own scenario and then use them: the complex scenarios where it really matters, and not necessarily ones of a quantitative nature. It seems that the only concern of data structures is efficiency and nothing else? There seems to be no particular convenience for the developer in using one over another. (Only when I found scientific resources on why exactly simple carbohydrates are evil did I stop eating sugar and candies completely, replacing them with less harmful fruits; I hope you can see the analogy.)

    Read the article

  • Business Intelligence goes Big Data

    - by Alliances & Channels Redaktion
    Big Data is the next big challenge for the IT industry: masses of data from ever more sources – social networks, telecommunications and web logs, RFID readers etc. – have to be logically linked, integrated in real time and processed. But how is the practical implementation going? A Europe-wide study by Steria Mummert Consulting shows that only 28% of companies have so far implemented an overarching, coordinated business intelligence strategy. Isolated BI solutions, which are already operating at the limits of their capacity, predominate. So far, then, data is being used only to a limited extent as a value-creating resource! The result of the study sounds alarming, but companies can turn it to their advantage: whoever tackles the topic of Big Data now can secure a profitable lead over the competition. What will the analysis environment of the future look like? How and where can Big Data be used for business success? Answers are provided by the customer event series run by Oracle and the Oracle Platinum Partner Steria Mummert Consulting: here, strategies are developed for how companies can use Information Discovery to expand their BI potential step by step on the way to Big Data. Highlights from Munich: the feedback from Munich, the first stop of the event series on 23 July, was thoroughly positive. It was not just the great venue, "La Villa" in the Bamberger Haus, that won people over; the 31 participants also took away a great deal of content, including a concrete proposal for their own roadmap towards Big Data. The opening question of the day was as simple as it was comprehensive: how can we keep an overview in a complex world? Steria Mummert Consulting presented the status quo for business intelligence in Europe based on the European biMA® Study 2012/13. Using application examples from their own practice, the invited experts from Oracle and Steria Mummert Consulting presented various solution approaches. A very vivid demo of Endeca showed, for example, how simple and flexible a dashboard can be: there are no predefined reports; instead, decision-makers can simply change the filters via drag & drop and thus get an individually structured overview of their data. The session on Oracle Business Analytics for mobile applications and Real-Time Decisions offered an outlook. Conclusion: a successful mix of overview information and very concrete ideas for the customers' specific application areas. The "BI goes Big Data" event series stops in Hamburg and Frankfurt in August. The free event is held together with Steria Mummert Consulting and is aimed at end customers. In Hamburg on 14 August 2013 – registration. In Frankfurt a.M. on 20 August 2013 – registration.

    Read the article

  • Analysing SQLBits Feedback

    - by jamiet
    Earlier this week I received all the feedback that people offered on my session at SQLBits 7 in York – "SSIS Dataflow Performance Tuning" (the video is available online if you wish to see it). As you may have gathered from previous posts on this blog and my less-SQLy-focused Wordpress blog, I am a big fan of collecting and tracking both personal and public data, and session feedback lends itself very well to tracking because it is quantitative rather than qualitative; by that I mean attendees are invited to provide marks out of ten rather than (or, in the case of SQLBits, as well as) written comments. The SQLBits feedback is also useful because they use a consistent format – the same questions are asked each time – and this means it is particularly easy to track whether the scores that people give are trending up or down. I suspect that somewhere the SQLBits organisers have a big Analysis Services cube (ok, perhaps it's an Excel pivot table) that allows them to analyse these scores per conference, speaker, track etc., and there's no reason that we as session speakers cannot do the same thing. To that end I have started to store my feedback in an Excel spreadsheet of my own which, in the interests of transparency, is available for public viewing (only a web browser required) on SkyDrive at http://cid-550f681dad532637.office.live.com/view.aspx/Public/Misc/Personal%20SQLBits%20Session%20Feedback.xlsx. I have used a pivot table to aggregate all that feedback and here is a screenshot: I am hereby making a public plea to the SQLBits organisers (on the off-chance that they are reading) to please continue to keep the feedback format consistent in the future, and I encourage them to publish all of the feedback in an anonymised form. I would also encourage anyone doing conference speaking to track their conference feedback in the same way that I am doing, so that they get an insight into whether or not they are improving over time. It is not difficult to set up, and maintaining it as they do more sessions takes very little effort. Storing feedback data like this leads me to wider thoughts about well-known conventions and data format standardisation. Let's imagine a utopia where there were a standard set of questions for capturing session feedback that were leveraged at every conference regardless of subject matter, location or culture; that would give rise to immense cross-conference and cross-discipline analysis – the data analyst in me goes giddy at the thought of it. It is scenarios like this that drive my interest both in data formats such as iCalendar, microformats and RDF, and in emerging movements such as the semantic web and linked data, all things which I have written about in the past. I don't know whether we will ever reach the stage where every piece of data has structured, descriptive metadata associated with it, but I live in hope. @Jamiet

    Read the article
