Search Results

Search found 32994 results on 1320 pages for 'second level cache'.


  • PASS Summit 2010 BI Workshop Feedbacks

    - by Davide Mauri
    As many other speakers have already done, I'd like to share with the SQL community the feedback from my PASS Summit 2010 workshop. For those who were not there, my workshop was "BI From A-Z", and its main objective was to introduce people to the BI world not only from a technical point of view, but also with a strong insistence on a methodological, "engineered" approach.

    The desire to put more engineering into IT (and especially into the BI field) is something that has been growing stronger in me every day over the last 5 years, since I simply envy the fact that Airbus, Fincantieri, BMW (just to name a few) can create very complex machines "just" by putting people together and giving them some rules to follow. (Of course this is an oversimplification, but I think you get what I mean.) The key point of engineering is that, after having defined the project blueprint, you can give a huge number of people the rules to follow, the correct tools to implement those rules easily and semi-automatically, and a way to measure the quality of the results. Could this be done in IT? That is a very big question, so my scope here is limited to BI.

    So that's the main point of my workshop: an entry-level approach to BI (the level was 200) that lets attendees learn the basics, understand which tools they should use for which purpose and, above all, pick up a set of rules and tools that make a BI solution scalable in terms of the people working on it, while still maintaining very good quality. All of this was done not by focusing only on practice but by explaining the theory behind it, to show how much it helps in building a correct solution regardless of the technology used to implement it. The idea is to reach a point where more than 70% of the work done to create a BI solution can be reused even if the technology changes. This is a very demanding challenge nowadays, with the coming of Denali and its column-aligned storage and the shiny new DAX language.

    As you may understand, I was looking forward to the feedback, since you may have noticed that there's a lot of "architectural" stuff in IT but really nothing on "engineering", so how the session would be perceived by the attendees was really unknown to me. The feedback could also give a good indication of whether the need for more "engineering" is something only I feel, or something broader. I'm very happy to be able to say that the overall score of 4.75 put my workshop in the top 20 sessions (out of nearly 200)! Here are the detailed evaluations (each answer is followed by its number of responses):

    How would you rate the usefulness of the information presented in your day-to-day environment? 4.75
        3: 1    4: 12    5: 42
    How would you rate the Speaker's presentation skills? 4.80
        3: 1    4: 9     5: 45
    How would you rate the Speaker's knowledge of the subject? 4.95
        4: 3    5: 52
    How would you rate the accuracy of the session title, description and experience level to the actual session? 4.75
        3: 2    4: 10    5: 43
    How would you rate the amount of time allocated to cover the topic/session? 4.44
        3: 7    4: 17    5: 31
    How would you rate the quality of the presentation materials? 4.62
        4: 21   5: 34

    The comments were all very positive. Many of them asked for more time on the subject (or for the very last topics to be shortened). I'll treasure these comments and will review the content accordingly. We'll organize a two-day class on this topic, where more examples will be shown and some arguments will be explained in more depth.

    I'd just like to answer a comment that asks how much of what I showed is "universally applicable". I can tell you that all of our BI projects follow these rules, and they've been applied in different markets (insurance, fashion, GDO) with different people and different teams, and they have allowed us to be "adaptive" towards the customer. The better the rules are defined, and the more tools there are to support their implementation, the easier it is to add new people to the project and to add or change solution features. Think of a car: how come almost any mechanic can help you fix a problem? Because they know what to expect. Because there are rules that allow them to identify the problem without having to discover each time how the car has been built. And this is of course also true for car upgrades and improvements. Last but not least: thanks a lot to everyone for coming!

    Read the article

  • Which approach is the most maintainable?

    - by 2rs2ts
    When creating a product which will inherently suffer from regression due to OS updates, which of these is the preferable approach for reducing maintenance cost and the likelihood of needing refactoring, when the task is interpreting system state and settings for a lay user?

    1. Delegate the responsibility for interpreting the results of inspecting the system to the modules which perform the inspection, or
    2. Separate the concerns of interpretation and inspection into two modules.

    The first obviously creates a blob in which a lot of code would be verbose, redundant, and hard to grok; the second creates a strong coupling, in which the interpretation module essentially has to know what it expects from the inspection routines and will have to adapt to OS changes just as much as the inspection code does. I would normally choose the second option for the separation of concerns (see the sketch below), foreseeing the possibility that inspection routines could be reused. But a developer updating the product to deal with a new OS feature would have to write not only an inspection routine but also an interpretation routine and link the two correctly - and it gets worse for a developer who has to change which inspection routines are used to get a certain system setting or, worse yet, has to fix an inspection routine which broke after an OS patch. I wonder: is it better to have to patch one package a lot, or two packages, each somewhat less so?
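
    For illustration only, here is a minimal sketch of option 2 in Python. Every name in it (Finding, inspect_firewall, INTERPRETATIONS) is hypothetical, invented for this example; the point is that the coupling between the two modules shrinks to a small contract (the finding's key), so an OS patch that breaks a probe only touches the inspection side unless the meaning of the setting itself changes.

        from dataclasses import dataclass

        @dataclass
        class Finding:
            """Raw result of inspecting one system setting."""
            key: str        # e.g. "firewall.enabled"
            value: object   # whatever the probe observed

        def inspect_firewall() -> Finding:
            # A real product would query the OS here; stubbed for the sketch.
            return Finding(key="firewall.enabled", value=True)

        # Interpretation lives apart from inspection: it knows only the
        # contract (Finding.key), not how the value was obtained.
        INTERPRETATIONS = {
            "firewall.enabled": lambda v: ("Your firewall is on." if v
                                           else "Your firewall is off."),
        }

        def interpret(finding: Finding) -> str:
            return INTERPRETATIONS[finding.key](finding.value)

        print(interpret(inspect_firewall()))  # -> Your firewall is on.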

    Read the article

  • Mercurial says "nothing changed", but it did. Sometimes my software is too clever.

    - by user12608033
    It seems I have found a "bug" in Mercurial. It takes a shortcut when checking for differences in tracked files: if a file's size and modification time are unchanged, it assumes its contents are unchanged.

        $ hg init .
        $ cp -p .sccs2hg/2005-06-05_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg add nicstat.c
        $ hg commit -m "added nicstat.c"
        $ cp -p .sccs2hg/2005-07-02_00\:00\:00\,nicstat.c nicstat.c
        $ ls -ogE nicstat.c
        -rw-r--r-- 1 14722 2012-08-24 11:22:48.819451726 -0700 nicstat.c
        $ hg diff
        $ hg commit
        nothing changed
        $ touch nicstat.c
        $ hg diff
        diff -r b49cf59d431d nicstat.c
        --- a/nicstat.c Fri Aug 24 11:21:27 2012 -0700
        +++ b/nicstat.c Fri Aug 24 11:22:50 2012 -0700
        @@ -2,7 +2,7 @@
          * nicstat - print network traffic, Kb/s read and written. Solaris 8+.
          * "netstat -i" only gives a packet count, this program gives Kbytes.
          *
        - * 05-Jun-2005, ver 0.81 (check for new versions, http://www.brendangregg.com)
        + * 02-Jul-2005, ver 0.90 (check for new versions, http://www.brendangregg.com)
          * [...]

    Now, before you agree or disagree with me on whether this is a bug, I will also say that I believe it is a feature. Yes, I feel it is an acceptable shortcut, because in "real" situations an edit to a file will change the modification time by at least one second (the resolution that hg diff or hg commit is looking for). The benefit of the shortcut is greatly improved performance of operations like "hg diff" and "hg status", particularly where your repository contains a lot of files.

    Why did I have no change in modification time? Well, my source file was generated by a script that I have written to convert SCCS change history to Mercurial commits. If my script can generate two revisions of a file within a second, and the files are the same size, then I run afoul of this shortcut. Solution: I will just change my script to apply the modification time from the SCCS history to the file prior to commit. A "touch -t <time>" will do that easily.
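
    (If the conversion script happens to be Python - an assumption on my part, the post doesn't say - the equivalent of that touch is a single os.utime call before each commit. A sketch, with a made-up timestamp format:

        import os
        import time

        def apply_sccs_mtime(path, sccs_time):
            """Set atime/mtime to the SCCS delta time, e.g.
            sccs_time = "2005-07-02 00:00:00", so Mercurial sees a
            distinct timestamp for each generated revision."""
            t = time.mktime(time.strptime(sccs_time, "%Y-%m-%d %H:%M:%S"))
            os.utime(path, (t, t))  # (atime, mtime)
    )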

    Read the article

  • Gnome- vs Unity-panel (applet) compatibility?

    - by user5676
    I just love the indicator-applet and other parts of the Ayatana project, and I think Ubuntu has done an awesome job there. And as the question about applet compatibility seems to have been answered with a 'no', I'd like to take the question to the next level: the 'why' and 'why not'. How come these Ayatana applets today work in gnome-panel, but GNOME applets won't work in the Unity panel? And, as it's connected, why not make them compatible? Isn't it all about usability?

    Read the article

  • Is using a dedicated thread just for sending gpu commands a good idea?

    - by tigrou
    The most basic game loop is like this:

        while(1)
        {
            update();
            draw();
            swapbuffers();
        }

    This is very simple but has a problem: some drawing commands can block, and the CPU will wait when it could be doing other things (like processing the next update() call). Another possible solution I have in mind would be to use two threads: one for updating and preparing commands to be sent to the GPU, and one for sending these commands to the GPU:

        // first thread
        while(1)
        {
            update();
            render();  // use game state to generate all needed triangles and
                       // commands for the GPU; put them in a buffer, no command
                       // is sent to the GPU (two buffers are used, see below)
            pulse();   // signal the other thread that data is ready
        }

        // second thread
        while(1)
        {
            wait();              // wait for the first thread's data to arrive
            send_data_to_gpu();  // send prepared commands from the buffer to the graphics card
            swapbuffers();
        }

    Two buffers would be used, so one buffer can be filled with GPU commands while the other is being processed by the GPU. Do you think such a solution would be effective? What would be the advantages and disadvantages of such a solution, especially compared to a simpler one (e.g. single-threaded with triple buffering enabled)?
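
    Not a real renderer, but the double-buffered hand-off itself can be sketched in a few lines of Python (all names invented for the sketch; a real engine would use native threads and the GPU API). The update thread fills one buffer while the render thread drains the other, and a condition variable plays the role of pulse()/wait():

        import threading

        FRAMES = 3
        buffers = [[], []]          # two command buffers
        cond = threading.Condition()
        filled = None               # index of the buffer ready for the GPU, or None

        def update_thread():
            global filled
            write = 0
            for frame in range(FRAMES):
                buffers[write] = ["cmd-%d" % frame]  # stand-in for real GPU commands
                with cond:
                    while filled is not None:        # renderer still owns a buffer
                        cond.wait()
                    filled = write
                    cond.notify_all()                # pulse()
                write ^= 1                           # flip to the other buffer

        def render_thread():
            global filled
            for _ in range(FRAMES):
                with cond:
                    while filled is None:            # wait()
                        cond.wait()
                    read = filled
                print("sending", buffers[read])      # send_data_to_gpu(); swapbuffers()
                with cond:
                    filled = None                    # hand the buffer back
                    cond.notify_all()

        t1 = threading.Thread(target=update_thread)
        t2 = threading.Thread(target=render_thread)
        t1.start(); t2.start(); t1.join(); t2.join()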

    Read the article

  • How to watch for count of new lines in tail

    - by fl00r
    I want to do something like this:

        watch tail -f | wc -l
        #=> 43
        #=> 56
        #=> 61
        #=> 44
        #=> ...

    It should count the new lines that tail emits each second (Linux, CentOS). To be more clear, I have got something like this:

        tail -f /var/log/my_process/*.log | grep error

    I am reading some error messages, and now I want to count them: roughly how many errors I get per second. One line in a log is one error in the process.
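
    One way to get such a counter is to pipe the existing command into a small Python script (the script name here is made up; note --line-buffered so grep does not hold lines back in the pipe):

        tail -f /var/log/my_process/*.log | grep --line-buffered error | python count_per_second.py

        # count_per_second.py - print how many stdin lines arrived each second.
        # It only reports while lines are flowing; a silent second prints nothing.
        import sys
        import time

        count = 0
        mark = time.time()
        for line in iter(sys.stdin.readline, ""):  # avoids stdin readahead buffering
            count += 1
            now = time.time()
            if now - mark >= 1.0:
                print(count)
                sys.stdout.flush()
                count = 0
                mark = now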

    Read the article

  • How to write regex in asp.net MVC 3 razor

    - by anirudha
    Here are two small tricks for writing regexes and other exceptional code in ASP.NET MVC 3 Razor views. The first trick is to use @@ instead of @. It works; don't worry, the output is @, not @@. For example, this regex does not work as-is, because it uses @:

        var pattern = new RegExp(/^(("[\w-\s]+")|([\w-]+(?:\.[\w-]+)*)|("[\w-\s]+")([\w-]+(?:\.[\w-]+)*))(@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$)|(@\[?((25[0-5]\.|2[0-4][0-9]\.|1[0-9]{2}\.|[0-9]{1,2}\.))((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\.){2}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\]?$)/i);

    So replace each @ with @@ and it will work:

        var pattern = new RegExp(/^(("[\w-\s]+")|([\w-]+(?:\.[\w-]+)*)|("[\w-\s]+")([\w-]+(?:\.[\w-]+)*))(@@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$)|(@@\[?((25[0-5]\.|2[0-4][0-9]\.|1[0-9]{2}\.|[0-9]{1,2}\.))((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\.){2}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\]?$)/i);

    The second way is to use the <text></text> tag, which tells the view engine to leave the block alone, so the single @ can stay:

        <text>
        function isValidEmailAddress(emailAddress) {
            var pattern = new RegExp(/^(("[\w-\s]+")|([\w-]+(?:\.[\w-]+)*)|("[\w-\s]+")([\w-]+(?:\.[\w-]+)*))(@((?:[\w-]+\.)*\w[\w-]{0,66})\.([a-z]{2,6}(?:\.[a-z]{2})?)$)|(@\[?((25[0-5]\.|2[0-4][0-9]\.|1[0-9]{2}\.|[0-9]{1,2}\.))((25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\.){2}(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[0-9]{1,2})\]?$)/i);
            return pattern.test(emailAddress);
        }
        </text>

    Read the article

  • SQL 2008 managing 2 Instances on single Physical server

    - by Rajeev
    Hi, I need some clarification about managing SQL Server 2008. The scenario is as follows: I have one physical Windows server at the primary site, and I want to host two different application databases on it. Should I create two instances on the same server, or have a different server for the second database? The first database is for management purposes, while the second will be used for reporting. There is also a second database at the secondary site, which will be in passive mode; I intend to connect them through MSCS. Now, can I have both instances on a single server, and will both work fine? The management database will be used more. Can both instances have dedicated resources allocated to them? Please reply soon.

    Read the article

  • PAL–Performance Analysis of Logs

    - by GavinPayneUK
    I was doing some research earlier this week on SQL Server related troubleshooting tools and was surprised I'd forgotten about Microsoft's PAL tool - Performance Analysis of Logs. PAL is a free PowerShell UI based tool from Microsoft that creates a perfmon template, which can then be used to capture the counters most relevant to the high-level performance review PAL will then give for specific Microsoft server deployments, SQL Server being one of them. Everyone knows what perfmon does, probably too...(read more)

    Read the article

  • GDD-BR 2010 [1F] Flexible Android Applications

    Speaker: Fred Chung. Track: Android. Time slot: F [15:30 - 16:15]. Room: 1. Level: 201. Length: 37:47. Android provides facilities to make flexible applications that work well for everyone on any piece of hardware running Android. This session will cover localization and internationalization, as well as how to write an app that can detect and adapt to the hardware and software resources available to it. From: GoogleDevelopers.

    Read the article

  • Friday tips #2

    - by Chris Kawalek
    Welcome to our second Friday tips blog! You can ask us questions using the hashtag #AskOracleVirtualization on Twitter and we'll do our best to answer them. Today we've got a VDI-related question on linked clones: Question: I want to use linked clones with Oracle Virtual Desktop Infrastructure. What are my options? Answer by John Renko, Consulting Developer, Oracle: First, linked clones are available with the Oracle VirtualBox hypervisor only. Second, your choice of storage will affect the rest of your architecture. If you are using a SAN presenting iSCSI LUNs, you can have linked clones with an Oracle Enterprise Linux based hypervisor running VirtualBox; OEL will use OCFS2 to allow VirtualBox to create the linked clones. Because of the OCFS2 requirement, a Solaris based VirtualBox hypervisor will not be able to support linked clones on remote iSCSI storage. If you are using the local storage option on your hypervisors, you can have linked clones with Solaris or Linux based hypervisors running VirtualBox. In all cases, Oracle Virtual Desktop Infrastructure makes the right selection for creating clones - sparse or linked - behind the scenes. Plan your architecture accordingly if you want to ensure you get the higher performing linked clones.

    Read the article

  • How would one build a relational database on a key-value store, a-la Berkeley DB's SQL interface?

    - by coleifer
    I've been checking out Berkeley DB and was impressed to find that it supported a SQL interface that is "nearly identical" to SQLite. http://docs.oracle.com/cd/E17076_02/html/bdb-sql/dbsqlbasics.html#identicalusage I'm very curious, at a high level, how this kind of interface might have been architected. For instance:

    - since values are "transparent", how do you efficiently query and sort by value?
    - how are limits and offsets performed efficiently on large result sets?
    - how would the keys be structured and serialized for good average-case performance?
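
    On the first point, one common trick (a sketch of the general technique, not necessarily how BDB's SQL layer does it) is to maintain a second, ordered key range as a secondary index: querying by value becomes a prefix scan over sorted keys, and LIMIT/OFFSET become slicing of that scan. A toy version in Python, with a plain dict standing in for the store and all key layouts invented for the example:

        import json

        store = {}  # stand-in for the KV store (BDB B-trees keep keys sorted)

        def put_row(table, pk, row):
            store["%s/pk/%s" % (table, pk)] = json.dumps(row)
            # Secondary index: one empty entry per indexed column value.
            for col, val in row.items():
                store["%s/ix/%s/%s/%s" % (table, col, val, pk)] = ""

        def query_by_value(table, col, val, limit=10, offset=0):
            # "WHERE col = val LIMIT/OFFSET" as a sorted prefix scan. A real
            # engine would range-scan the B-tree instead of scanning all keys.
            prefix = "%s/ix/%s/%s/" % (table, col, val)
            pks = sorted(k[len(prefix):] for k in store if k.startswith(prefix))
            return [json.loads(store["%s/pk/%s" % (table, pk)])
                    for pk in pks[offset:offset + limit]]

        put_row("users", "1", {"name": "ada"})
        put_row("users", "2", {"name": "ada"})
        print(query_by_value("users", "name", "ada"))  # both rows, in key order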

    Read the article

  • How to enable ufw firewall to allow icmp response?

    - by Jeremy Hajek
    I have a series of Ubuntu 10.04 servers, and each one has the ufw firewall enabled. I have allowed port 22 (for SSH) and 80 (if it's a webserver). My question is that I am trying to enable ICMP echo response (ping reply). ICMP functions differently from port-based protocols such as TCP and UDP: you can just type sudo ufw allow 22, but you cannot type sudo ufw allow icmp. How should I attack this problem?
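
    For what it's worth, ufw keeps its protocol-level rules in /etc/ufw/before.rules rather than in the ufw command syntax. On a stock install that file already contains ICMP accept rules along these lines (quoted from memory, so verify against your own file before editing):

        # ok icmp codes
        -A ufw-before-input -p icmp --icmp-type destination-unreachable -j ACCEPT
        -A ufw-before-input -p icmp --icmp-type echo-request -j ACCEPT

    If the echo-request line is present and set to ACCEPT, ping replies should already work; if it has been changed to DROP, restoring ACCEPT and then disabling and re-enabling ufw (sudo ufw disable && sudo ufw enable) applies the change.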

    Read the article

  • 5.1 surround sound

    - by rocker9455
    OK, so I've always had trouble enabling 5.1 in Ubuntu. Running alsamixer I have: Master, Headphones, PCM, Front, Front Mi, Front Mi, Surround, Center. All are at 100%. Card: HDA Intel. Chip: Realtek ALC888. (This is my onboard sound; it's a Dell Studio with 7.1 integrated sound.) Running "speaker-test -c6 -twav" I only get the front two speakers (right/left) making any noise; the others make no noise at all. I have no other sound card to use, as all my PCI slots are used up. My daemon.conf:

        ; daemonize = no
        ; fail = yes
        ; allow-module-loading = yes
        ; allow-exit = yes
        ; use-pid-file = yes
        ; system-instance = no
        ; enable-shm = yes
        ; shm-size-bytes = 0 # setting this 0 will use the system-default, usually 64 MiB
        ; lock-memory = no
        ; cpu-limit = no
        ; high-priority = yes
        ; nice-level = -11
        ; realtime-scheduling = yes
        ; realtime-priority = 5
        ; exit-idle-time = 20
        ; scache-idle-time = 20
        ; dl-search-path = (depends on architecture)
        ; load-default-script-file = yes
        ; default-script-file =
        ; log-target = auto
        ; log-level = notice
        ; log-meta = no
        ; log-time = no
        ; log-backtrace = 0
        resample-method = speex-float-1
        ; enable-remixing = yes
        ; enable-lfe-remixing = no
        flat-volumes = no
        ; rlimit-fsize = -1
        ; rlimit-data = -1
        ; rlimit-stack = -1
        ; rlimit-core = -1
        ; rlimit-as = -1
        ; rlimit-rss = -1
        ; rlimit-nproc = -1
        ; rlimit-nofile = 256
        ; rlimit-memlock = -1
        ; rlimit-locks = -1
        ; rlimit-sigpending = -1
        ; rlimit-msgqueue = -1
        ; rlimit-nice = 31
        ; rlimit-rtprio = 9
        ; rlimit-rttime = 1000000
        ; default-sample-format = s16le
        ; default-sample-rate = 44100
        ; default-sample-channels = 6
        ; default-channel-map = front-left,front-right
        default-fragments = 8
        default-fragment-size-msec = 10

    Read the article

  • Backups of Exchange 2007 SP3 using VSS are abnormally large

    - by Stew
    I have recently implemented Veeam backup and recovery 6.0, and have noted that when backing up my Exchange server via incremental updates, it is transferring way more data than expected. The backup is incremental and set up to use VSS, and VSS is stable and healthy according to vssadmin. It's Exchange 2007 SP3 running on Windows Server 2008 R2; just last weekend I installed the latest rollup for Exchange. I thought the nightly incrementals were large, but perhaps my users really are sending that much mail, so I tested by taking one incremental backup, waiting 10 minutes and taking a second. The second incremental backup transferred 5.8GB of data. We as an organization are absolutely NOT putting 5.8GB of data on the mail server every 10 minutes. Are there any other Veeam users who have seen something similar? Is my test faulty? Are there other considerations for VSS?

    Read the article

  • Batch normalize audio volume on .ogg-files

    - by Embridioum
    Are there any simple tools to normalize an .ogg library of 10,000+ songs so that the volume is the same across all songs? Terminal or GUI doesn't matter; it only needs to be simple. One caveat though: I don't want soft interludes, intermissions and ballads blown out of proportion. Preferably the process should find the overall gain of each album (I have all my CDs ripped into separate folders) and normalize the level accordingly.
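
    One candidate, assuming ReplayGain-style tagging is acceptable (it adjusts playback level via tags rather than rewriting the audio, so players must honor the tags): vorbisgain has an album mode that computes a shared gain per directory, which matches the one-folder-per-CD layout. Something along these lines, though check your version's man page for the exact flags:

        # -a: album mode, -r: recurse; each directory is treated as one album
        vorbisgain -a -r /path/to/ogg-library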

    Read the article

  • Ethical White Hat SEO Services

    SEO is such a vast amalgamation of features that professionals regularly working on the process have found loopholes through which results can be manipulated. These unfair practices and malpractices have damaged the credibility of the process and have also given critics and SEO bashers a chance to level the most outrageous and preposterous allegations against it and its efficacy.

    Read the article

  • Should we encourage coding styles in favor of developer's autonomy, or discourage it in favor of consistency?

    - by Saeed Neamati
    One developer writes if/else blocks with one-line statements like:

        if (condition)
            // Do this one-line code
        else
            // Do this one-line code

    Another uses curly braces for all of them:

        if (condition)
        {
            // Do this one-line code
        }
        else
        {
            // Do this one-line code
        }

    One developer first instantiates an object, then uses it:

        HelperClass helper = new HelperClass();
        helper.DoSomething();

    Another instantiates and uses the object in one line:

        new HelperClass().DoSomething();

    One developer is more at ease with arrays and for loops:

        string[] ordinals = new string[] { "First", "Second", "Third" };
        for (int i = 0; i < ordinals.Length; i++)
        {
            // Do something
        }

    Another writes:

        List<string> ordinals = new List<string>() { "First", "Second", "Third" };
        foreach (string ordinal in ordinals)
        {
            // Do something
        }

    I'm sure you know what I'm talking about. I call it coding style (because I don't know what else it's called). But whatever we call it, is it good or bad? Does encouraging it improve developer productivity? Should we ask developers to try to write code the way we tell them, so that the whole system becomes style-consistent?

    Read the article

  • Understanding the levels of computing

    - by RParadox
    Sorry for my confused question; I'm looking for some pointers. Up to now I have been working mostly with Java and Python on the application layer, and I have only a vague understanding of operating systems and hardware. I want to understand much more about the lower levels of computing, but it gets really overwhelming somehow. At university I took a class about microprogramming, i.e. how processors are hard-wired to implement the ASM codes. Up to now I always thought I wouldn't get more done if I learned more about the "low level".

    One question I have is: how is it even possible that hardware gets hidden almost completely from the developer? Is it accurate to say that the operating system is a software layer for the hardware? One small example: in programming I have never come across the need to understand what L2 or L3 cache is. For the typical business application environment one almost never needs to understand assembler and the lower levels of computing, because nowadays there is a technology stack for almost anything. I guess the whole point of these lower levels is to provide an interface to the higher levels. On the other hand, I wonder how much influence the lower levels can have, for example in this whole graphics computing thing.

    Then, on the other hand, there is the theoretical computer science branch, which works on abstract computing models. However, I have also rarely encountered situations where I found it helpful to think in the categories of complexity models, proof verification, etc. I sort of know that there is a complexity class called NP, and that such problems are kind of impossible to solve for large N. What I'm missing is a reference framework for thinking about these things. It seems to me that there are all kinds of different camps which rarely interact.

    The last few weeks I have been reading about security issues. Here, somehow, many of the different layers come together. Attacks and exploits almost always occur on the lower levels, so in this case it is necessary to learn about the details of the OSI layers, the inner workings of an OS, etc.

    Read the article

  • Configuring Oracle as a Data Source for SQL Server

    Discover what happens within SQL Server during and after configuring Oracle as a data source. Quite a few objects are created, including a system-level database, numerous jobs running under the SQL Server agent, and flat files created on the file system. Read on to learn more.

    Read the article

  • What Would You Select?

    Software development is a collection of trade-offs: performance vs. speed to market, quick-and-dirty vs. maintainable, on and on. Most tend to sacrifice user experience at some level for time to market; others do not consider maintainability, reliability....(read more)

    Read the article

  • WordPress - Emergency Access Without Admin Accounts

    In some cases you need to do something in a WordPress website, but all you have is access to the WordPress database and FTP, and you cannot get the admin password from the database because it is stored encrypted. You then have to make all changes via low-level MySQL queries, which is hard and error-prone. Joost de Valk has written a script for emergency access to the WordPress dashboard by changing the admin password or creating a new user.

    Read the article
