Search Results

Search found 43123 results on 1725 pages for 'function parameter'.


  • Responsible BI for Excel, Even for Older Versions

    - by andrewbrust
    On Wednesday, I will have the honor of co-presenting, for both The Data Warehouse Institute (TDWI) and the New York Technology Council, on the subject of Excel and BI. My co-presenter will be none other than Bill Baker, who was a Microsoft Distinguished Engineer and, essentially, the father of BI at that company. Details on the events are here and here.

    We'll be talking about PowerPivot, of course, but that's not all. Probably even more important than any one product will be our discussion of whether the usual characterization of Excel - as the nemesis of IT, the guilty pleasure of business users, and the antithesis of formal BI - is really valid and/or hopelessly intractable. Without giving away our punchline, I'll tell you that we are much more optimistic than that. There are huge upsides to Excel, and while there are real dangers to using it in the BI space, there are standards and practices you can employ to ensure Excel is used responsibly. And when those practices are followed, Excel becomes quite powerful indeed.

    One of the keys to this is using Excel as a data consumer rather than a data storage mechanism. Caching data in Excel is OK, but only if that data is (a) not modified and (b) configured for automated periodic refresh. PowerPivot meets both criteria: it stores a read-only copy of your data in the form of a model, and once a workbook containing a PowerPivot model is published to SharePoint, it can be configured for scheduled data refresh, on the server, requiring no user intervention whatsoever. Data refresh is a bit like hard drive backup: it will only happen reliably if it's automated and super-easy to configure. PowerPivot hits a real home run here (as does Windows Home Server for PC backup, but I digress).

    The thing about PowerPivot is that it's an add-in for Excel 2010. What if you're not planning to go to that new version for quite a while? What if you've just deployed Office 2007 in your organization? What if you're still on Office 2003, or an even earlier version? What can you do immediately to share data responsibly and easily? As it turns out, there's a feature in Excel that's been around for quite a while that can help: Web Queries.

    The Web Query feature was introduced, ostensibly, to allow Excel to pull data in from Internet Web pages. For example, data in a stock quote history table will come in nicely, as will any data in a Web page that is displayed in an HTML table. To use the feature in Excel 2007 or 2010, click the Data tab of the ribbon and click the "From Web" button towards the left; in older versions, use the corresponding option in the menu or toolbars. Next, paste a URL into the resulting dialog box and tap Enter or click the Go button. A preview of the Web page will come up, and the dialog will allow you to select the specific table within the page whose data you'd like to import. Now just click the table, click the Import button, and the Import Data dialog appears. You can simply click OK to bring in your data, or you can first click the Properties… button and configure the data import to be refreshed at an interval in minutes that you select. Now your data's in the spreadsheet and ready to be worked with. Your data may be vulnerable to modification, but if you've set up the data refresh, any accidental or malicious changes will be corrected in time anyway. The thing about this feature is that it's most useful not for public Web pages, but for pages behind the firewall.
    In effect, the Web Query feature provides an incredibly easy way to consume data in Excel that's "published" from an application. Users just need a URL. They don't need to know server and database names, and since the data is read-only, providing credentials may be unnecessary, or can be handled using integrated security. If that's not good enough, the Web Query can be saved to a special .iqy file, which can be edited to provide POST parameter data. The only requirement is that the data must be provided in an HTML table, with the first row providing the column names.

    From an ASP.NET project, it couldn't be easier: a simple bound GridView control is totally compatible. Use a data source control with it, and you don't even have to write any code. Users can link to pages that are part of an application's UI, or developers can create pages that are specially designed for the purpose of providing an interface to the Web Query import feature. And none of this is Microsoft- or .NET-specific. You can create pages in any language you want (PHP comes to mind) that output the result set of a query in HTML table format, and then consume that data in a Web Query. Then build PivotTables and charts on the data, and in Excel 2007 or 2010 you can use conditional formatting to create scorecards and dashboards.

    This strategy allows you to create pages that function quite similarly to the OData XML feeds rendered when .NET developers create an "Astoria" WCF Data Service. And while it's cool that PowerPivot and Excel 2010 can import such OData feeds, it's good to know that older versions of Excel can function in a similar fashion, and can consume data produced by virtually any Web development platform. As a final matter, instead of just telling you that "older versions" of Excel support this feature, I'll be more specific. To discover what the first version of Excel was to support Web queries, go to http://bit.ly/OldSchoolXL.
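    To give you a rough idea - and this is just a sketch, with made-up control names, connection string and query - a bound GridView plus a data source control renders exactly the kind of HTML table, column names in the first row, that the Web Query feature expects:

        <asp:SqlDataSource ID="dsOrders" runat="server"
            ConnectionString="<%$ ConnectionStrings:MyAppDb %>"
            SelectCommand="SELECT OrderID, OrderDate, Total FROM dbo.Orders" />
        <!-- A bound GridView emits a plain HTML table; the header row comes from the column names -->
        <asp:GridView ID="gvOrders" runat="server" DataSourceID="dsOrders" />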

    Read the article

  • Windows 8 Live Accounts and the actual Windows Account

    - by Rick Strahl
    As if Windows security wasn't confusing enough, in Windows 8 we get thrown yet another curve ball with Windows Live accounts to log on with. When I set up my Windows 8 machine I originally set it up with a 'real', non-Live account that I always use on my Windows machines. I did this mainly so I have a matching account for resources around my home and intranet network, so I could log on to network resources properly. At some point later I decided to set up Windows Live security just to see how it changes things.

    Windows wants you to use Windows Live

    In Windows 8, Live logins are required in order for the Windows RT account info to work. Not that I care - since installing Windows 8 I've maybe spent 10 minutes with Windows RT because - well, it's pretty freaking sucky on the desktop. From shitty apps to mis-managed screen real estate, I can't say that there's anything compelling there to date, but then I haven't looked that hard either. Anyway… I set up the Windows Live account to see if that changes things. It does - I do get all my Live logins to work from the Live account, so that Twitter and Facebook posts and pictures and calendars all show up on live tiles on the start screen and in the actual apps. That's nice-ish, but hardly that exciting given that all of the apps tied to those live tiles are average at best. And it would have been nice if all of this could be done without being forced into running with a Windows Live user account - this all feels like strong-arming you into moving into Microsoft's walled garden… and that's probably what it's meant to do.

    Who am I?

    The real problem to me, though, is that these Windows Live and raw Windows user accounts are a bit unpredictable, especially when it comes to developer information about the account and which credentials to use. So for example, when Windows reports folder security, the dialog shows my Windows Live account. Now if I go to Edit and try to add my Windows user account (rstrahl), it'll just automatically show up as the Live account. On the other hand, the underlying system sees everything as my real Windows account. After I switched to a Windows Live login account - so I have to log in to Windows with my Live account - what do you suppose this returns?

        Console.WriteLine(Environment.UserName);

    It returns my raw Windows user account (rstrahl). All my permissions, all my actual settings and the desktop console altogether run under that account. If I look in Task Manager (or Process Explorer, for me), I see everything on the desktop shell running under my Windows user account. I suppose it makes sense, but where is that association happening? When I switched to a Windows Live account, nowhere did I associate my real account with the Live account - it just happened. And looking through the account configuration dialogs I can't find any reference to the raw Windows account. Other than switching back, I see no mention anywhere of the raw Windows account - everything refers to the Live account. Right then, clear as potato soup!

    So this is who you really are!

    The problem is that in some situations this schizophrenic account behavior gets a bit weird. Today I was running a local Web application in IIS that uses Windows Authentication - I tried to log in with my real Windows account login, because that's what I'm used to using with WINDOWS freaking Authentication through IIS. But… it failed. I checked my IIS settings and my app's login settings, and I just could not for the life of me get into the site with my Windows username.
    That is, until I finally realized that I should try using my Windows Live credentials instead. And that worked. So now in this Windows Authentication dialog I had to type in my Live ID and password, which is - just weird. Then in IIS, if I look at a Trace page (or in my case my app's Status page), I see that the logged-on account is - my Windows user account. What's really annoying about this is that in some places it uses the Live account and in other places it uses my Windows account. If I remote desktop into my Web server online, I get the local authentication dialog, but I have to put in my real Windows credentials, not the Live account. Oh yes, it's all so terribly intuitive and logical…

    So, in summary: when you log on with a Live account, you are actually mapped to an underlying Windows user. In any application, if you check the user name, it'll be the underlying user account (I'm not sure what happens in a Windows RT app, or even what mechanism is used there to get the user name info). When logging on to local machine resources with user name and password, you have to use your Live IDs, even if the permissions on the resources are mapped to your underlying Windows account. Easy enough I suppose, but still not exactly intuitive behavior…

    © Rick Strahl, West Wind Technologies, 2005-2012. Posted in Windows.
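    If you want to see the mapping for yourself, here's a minimal console sketch - nothing here is specific to my setup, and both calls are standard .NET APIs:

        using System;
        using System.Security.Principal;

        class WhoAmI
        {
            static void Main()
            {
                // Both report the underlying Windows account, not the Live ID,
                // even when you logged on to Windows with a Live account.
                Console.WriteLine(Environment.UserName);              // e.g. rstrahl
                Console.WriteLine(WindowsIdentity.GetCurrent().Name); // e.g. MACHINE\rstrahl
            }
        }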

    Read the article

  • Is there an established convention for separating Windows file names in a string?

    - by Heinzi
    I have a function which needs to output a string containing a list of file paths. I can choose the separation character but I cannot change the data type (e.g. I cannot return a List<string> or something like that). Wanting to use some well-established convention, my first intuition was to use the semicolon, similar to what Windows's PATH and Java's CLASSPATH (on Windows) environment variables do: C:\somedir\somefile.txt;C:\someotherdir\someotherfile.txt However, I was surprised to notice that ; is a valid character in an NTFS file name. So, is the established best practice to just ignore this fact (i.e. "no sane person should use ; in a file name and if they do, it's their own fault") or is there some other established character for separating Windows paths or files? (The pipe (|) might be a good choice, but I have not seen it used anywhere yet for this purpose.)
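    For what it's worth, .NET can confirm that choice directly - a minimal sketch using only framework APIs, showing that '|' is rejected in Windows file names while ';' is allowed:

        using System;
        using System.IO;
        using System.Linq;

        class SeparatorCheck
        {
            static void Main()
            {
                char[] invalid = Path.GetInvalidFileNameChars();
                Console.WriteLine(invalid.Contains('|')); // True  -> can never appear in a file name
                Console.WriteLine(invalid.Contains(';')); // False -> legal in NTFS names (the problem above)

                // Joining and splitting with the pipe is therefore unambiguous:
                string joined = string.Join("|",
                    @"C:\somedir\somefile.txt",
                    @"C:\someotherdir\someotherfile.txt");
                string[] paths = joined.Split('|');
            }
        }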

    Read the article

  • Ogre3D, OGRE_NEW gives editor errors

    - by automaticoo
    Hello game developers, I am trying to get more experienced with OGRE3D and I am following the basic tutorials on the website. All the code samples compile just fine. The first time I encountered an "error" was in Basic Tutorial 3. The code line where Visual Studio says something is not right is:

        mTerrainGroup = OGRE_NEW Ogre::TerrainGroup(mSceneMgr, Ogre::Terrain::ALIGN_X_Z, 513, 12000.0f);

    Visual Studio compiles this line, but it shows an ugly red line beneath it, and on mouse hover it says "#define OGRE_NEW new (__FILE__, __LINE__, __FUNCTION__) Error: no instance of overloaded 'operator new' matches the argument list", or if I mouse hover over Ogre::TerrainGlobalOptions() it says "Error: expected a type specifier". I searched on Google but couldn't find my answer. I have about a year of C++ experience, but that was mainly with the WinSock libraries. Thanks in advance.

    Read the article

  • So what are zones really?

    - by Bertrand Le Roy
    There is a (not so) particular kind of shape in Orchard: zones. Functionally, zones are places where other shapes can render. There are top-level zones, the ones defined on Layout, where widgets typically go, and there are local zones that can be defined anywhere. These local zones are what you target in placement.info. Creating a zone is easy because it really is just an empty shape. Most themes include a helper for it:

        Func<dynamic, dynamic> Zone = x => Display(x);

    With this helper, you can create a zone by simply writing:

        @Zone(Model.Header)

    Let's deconstruct what's happening here with that weird lambda. In the Layout template where we are working, the Model is the Layout shape itself, so Model.Header is really creating a new Header shape under Layout, or getting a reference to it if it already exists. The Zone function is then called on that object, which is equivalent to calling Display. In other words, you could have just written the following to get the exact same effect:

        @Display(Model.Header)

    The Zone helper function only exists to make the intent very explicit. Now here's something interesting: while this works in the Layout template, you can also make it work from any deeper-nested template and still create top-level zones. The difference is that wherever you are, Model is not the layout anymore, so you need to access it in a different way:

        @Display(WorkContext.Layout.Header)

    This is still doing the exact same thing as above. One thing to know is that for top-level zones to be usable from the widget editing UI, you need one more thing, which is to specify them in the theme's manifest:

        Name: Contoso
        Author: The Orchard Team
        Description: A subtle and simple CMS theme
        Version: 1.1
        Tags: business, cms, modern, simple, subtle, product, service
        Website: http://www.orchardproject.net
        Zones: Header, Navigation, HomeFeaturedImage, HomeFeaturedHeadline, Messages, Content, ContentAside, TripelFirst, TripelSecond, TripelThird, Footer

    Local zones are just ordinary shapes like global zones, the only difference being that they are created on a deeper shape than Layout.
    For example, in Content.cshtml, you can find our good old code for creating a header zone:

        @Display(Model.Header)

    The difference here is that Model is no longer the Layout shape, so that zone will be local. The name of that local zone is what you specify in placement.info, for example:

        <Place Parts_Common_Metadata_Summary="Header:1"/>

    Now here's the really interesting part: zones do not even know that they are zones, and in fact any shape can be substituted. That means that if you want to add new shapes to the shape that some part has been emitting from its driver, for example, you can absolutely do that. And because zones are so barebones as shapes go, they can be created the first time they are accessed. This is what enables us to add shapes into a zone before the code that you would think creates it has even run. For example, in the Layout.cshtml template in TheThemeMachine, the BadgeOfHonor shape is being injected into the Footer zone on line 47, even though that zone will really be "created" on line 168.

    Read the article

  • Adding new events to be handled by the script part without recompiling the C++ source code

    - by paul424
    As the title says, I want to write an event system with handling methods written in an external scripting language, namely AngelScript. The AngelScript methods would have access to the game world's modifying API (which has to be registered with the AngelScript machine as the game starts). I have come to this problem: if we use AngelScript for rapid prototyping of the game world's behavior, we would like to be able to define new event types without recompiling the main binary. Is that even possible? Don't stick to AngelScript - I'm thinking about any possible scripting language. The main thought is this: if there's some really new event to be constructed, we must monitor some information coming from the binary, for example some variable's value changing, and the events are just triggered on some conditions over those changes. We might be monitoring a creature's HP by having a function called on each change; there we could define new events such as "creature hurt", "creature killed", "creature healed", or "creature killed and tormented to pieces" (like getting HP very much below 0). Are there any other ideas for constructing new events on the scripting side?
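    To make the idea concrete, here is a minimal sketch (in C# rather than C++/AngelScript, and with invented names) of "events as script-registered conditions over monitored values": the binary only reports raw changes, and each "new" event type is just a named predicate registered at runtime, so adding one never requires recompiling the binary.

        using System;
        using System.Collections.Generic;

        class HpMonitor
        {
            // name + condition + handler; in the real thing the condition and
            // handler would live on the script side, not in the binary.
            private readonly List<(string Name, Func<int, int, bool> When, Action Fire)> rules
                = new List<(string, Func<int, int, bool>, Action)>();

            public void DefineEvent(string name, Func<int, int, bool> when, Action fire)
                => rules.Add((name, when, fire));

            // The only hook the binary needs: called on every HP change.
            public void OnHpChanged(int oldHp, int newHp)
            {
                foreach (var rule in rules)
                    if (rule.When(oldHp, newHp))
                        rule.Fire();
            }
        }

    Usage would then look like: monitor.DefineEvent("CreatureKilled", (oldHp, newHp) => oldHp > 0 && newHp <= 0, () => Console.WriteLine("creature killed")); "creature healed" or "tormented to pieces" are just different predicates registered the same way.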

    Read the article

  • HTG Explains: Understanding Routers, Switches, and Network Hardware

    - by Jason Fitzpatrick
    Today we're taking a look at home networking hardware: what the individual pieces do, when you need them, and how best to deploy them. Read on to get a clearer picture of what you need to optimize your home network. When do you need a switch? A hub? What exactly does a router do? Do you need a router if you have a single computer? Network technology can be quite an arcane area of study, but armed with the right terms and a general overview of how devices function on your home network, you can deploy your network with confidence.

    Read the article

  • Why am I getting this error: "ExecuteReader: Connection property has not been initialized." [migrated]

    - by Olga
    I'm trying to read a .csv file to import its contents into a SQL table. I'm getting the error "ExecuteReader: Connection property has not been initialized." at the last line of this code:

        Function ImportData(ByVal FU As FileUpload, ByVal filename As String, ByVal tablename As String) As Boolean
            Try
                Dim xConnStr As String = "Driver={Microsoft Text Driver (*.txt; *.csv)};dbq=" & Path.GetDirectoryName(Server.MapPath(filename)) & ";extensions=asc,csv,tab,txt;"
                ' create your excel connection object using the connection string
                Dim objXConn As New System.Data.Odbc.OdbcConnection(xConnStr.Trim())
                objXConn.Open()
                Dim objCommand As New OdbcCommand(String.Format("SELECT * FROM " & Path.GetFileName(Server.MapPath(filename)), objXConn))
                If objXConn.State = ConnectionState.Closed Then
                    objXConn.Open()
                Else
                    objXConn.Close()
                    objXConn.Open()
                End If
                ' create a DataReader
                Dim dr As OdbcDataReader
                dr = objCommand.ExecuteReader()

    Read the article

  • This Task Is Currently Locked by a Running Workflow and Cannot Be Edited

    - by Jayant Sharma
    Problem: In SharePoint workflow, "This task is currently locked by a running workflow and cannot be edited" is a common exception that we face.

    Solution: Generally this exception occurs in two situations.

    1. When the number of items in the task list gets high. This exception says that the workflow is not able to deliver all the events at a given time, and so the tasks get locked. Out of the box, the default event delivery throttle value is 15. The event delivery throttle value specifies the number of workflows that can be processed at the same time across all front-end Web servers; see http://blogs.msdn.com/b/vincent_runge/archive/2008/09/16/about-the-workflow-eventdelivery-throttle-parameter.aspx. If the value returned by the query is higher than the throttle (15 by default), any new workflow event will not be processed immediately, so we need to change the throttle with an stsadm command like:

        stsadm -o setproperty -pn workflow-eventdelivery-throttle -pv "20"

    (http://technet.microsoft.com/en-us/library/cc287939(office.12).aspx)

    2. When we modify a workflow task from a custom task edit page. This exception occurs when we try to modify the workflow task outside the workflow's default page, e.g. from a custom workflow task edit page. Suppose we have a custom task edit page with a dropdown whose values are Submitted/In Progress/Completed, and we want to complete the task from there: it will throw this exception on the SPWorkflowTask.AlterTask method, which changes the TaskStatus. When I debugged to find the root cause, I actually found that the workflow is not locked - the InternalState flag of the workflow does not include the Locked flag bits (http://msdn.microsoft.com/en-us/library/dd928318(v=office.12).aspx). Then I found this link: http://geek.hubkey.com/2007/09/locked-workflow.html. It is exactly what I wanted. It says that the error occurs "when the WorkflowVersion of the task list item is not equal to 1". The solution that is proposed there works fantastically:

        if ((int)task[SPBuiltInFieldId.WorkflowVersion] != 1)
        {
            SPList parentList = task.ParentList.ParentWeb.Lists[new Guid(task[SPBuiltInFieldId.WorkflowListId].ToString())];
            SPListItem parentItem = parentList.Items.GetItemById((int)task[SPBuiltInFieldId.WorkflowItemId]);
            SPWorkflow workflow = parentItem.Workflows[new Guid(task[SPBuiltInFieldId.WorkflowInstanceID].ToString())];
            if (!workflow.IsLocked)
            {
                task[SPBuiltInFieldId.WorkflowVersion] = 1;
                task.SystemUpdate();
                break;
            }
        }

    It will reset the workflow version to 1 again.

    Conclusion: This exception is completely confusing, so we first need to find out whether our workflow is really locked or not. If it is really locked, use the first method. If not, check the workflow version and set it back to 1.

    Jayant Sharma

    Read the article

  • Battling Emacs Pinky?

    - by haziz
    My problem is not so much Emacs pinky as having to work with multiple machines, across 3 operating systems, both desktop and laptop, with differing keyboard layouts and different locations for the Ctrl and Alt/Meta keys, so I often have to pause and think about where the Ctrl key is on this machine. How do you deal with varying keyboard layouts between Mac keyboards (mostly the laptops) and PC keyboards (mostly 101 keys in my case - yes, the original PC keyboard)? I have turned the Caps Lock key into a Ctrl key (losing the Caps Lock function completely rather than swapping it with Ctrl) on most of them, but I still find myself hunting for the original Ctrl-labeled key most of the time. How do you deal with this keyboard confusion? Suggestions, ideas and feedback welcome.

    Read the article

  • Improving performance of fuzzy string matching against a dictionary [closed]

    - by Nathan Harmston
    Hi, I'm currently working with SecondString for fuzzy string matching, where I have a large dictionary to compare against (each entry in the dictionary has an associated non-unique identifier). I am currently using a HashMap to store this dictionary. When I want to do fuzzy string matching, I first check to see if the string is in the HashMap, and then I iterate through all of the other potential keys, calculating the string similarity and storing the key/value pair(s) with the highest similarity. Depending on which dictionary I am using, this can take a long time (12,330 - 1,800,035 entries). Is there any way to speed this up? I am currently writing a memoization function/table as a way of speeding this up, but can anyone else think of a better way to improve the speed? Maybe a different structure or something else I'm missing. Many thanks in advance, Nathan
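    For reference, here is what I'm doing now, sketched in C# for clarity (SecondString itself is Java; 'similarity' is a stand-in for whatever metric is in use). The exact-match check is O(1), but the fallback scan scores every key, which is what hurts on the larger dictionaries:

        using System;
        using System.Collections.Generic;

        static class FuzzyLookup
        {
            public static string BestKey(string query,
                                         IDictionary<string, string> dict,
                                         Func<string, string, double> similarity)
            {
                if (dict.ContainsKey(query))
                    return query; // exact hit: no scan needed

                string best = null;
                double bestScore = double.NegativeInfinity;
                foreach (string key in dict.Keys) // O(n) scan dominates the cost
                {
                    double score = similarity(query, key);
                    if (score > bestScore) { bestScore = score; best = key; }
                }
                return best;
            }
        }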

    Read the article

  • UK Connected Systems User Group - Udi Dahan Event Topic change

    - by Michael Stephenson
    Hi, just wanted to get the word out about a change to the May user group event. Udi Dahan will present a new topic which he has not presented in the UK before. Details below.

    To register for this event please refer to: http://ukconnectedsystemsusergroup.org/UpcomingEvents.aspx

    Title: High Availability - A Contrarian View

    Abstract: Many developers are aware of the importance of high availability, critically analyzing any single points of failure in the infrastructure. Those same developers rarely give a second thought to the periods of time when a system is being upgraded. Even if all the servers are running, most systems cannot function in-between versions. Yet with the increased pace of business, users are demanding ever more frequent releases. The poor maintenance programmers and system administrators are left holding the bag long after the architecture that sealed their fate was formulated. Join Udi for some different perspectives on high availability - architecture and methodology for the real world.

    Read the article

  • Is there a "golden ratio" in coding?

    - by badallen
    My coworkers and I often come up with silly ideas, such as adding entries to Urban Dictionary that are inappropriate but make complete sense if you are a developer, or making rap songs about delegates, reflection or closures in JS... Anyhow, here is what I brought up this afternoon, which was immediately dismissed as a stupid idea. So I want to see if I can get redemption here. My idea is coming up with a golden ratio (or something in the neighborhood of one) between the number of classes per project versus the number of methods/functions per class versus the number of lines per method/function. I know this is silly and borderline, if not completely, useless, but just think of all the legacy methods or classes you have encountered that are absolutely horrid - like methods with 10000 lines or classes with 10000 methods. So, a golden ratio, anyone? :)

    Read the article

  • clicktale.com alternative that works with https and ajax

    - by Alexey Ivanov
    I need to record users' actions on a site for analytics purposes. The way clicktale.com does it is just fine, but unfortunately it has problems working over https and recording Ajax events. Is there some service, or a script/library that I can host, that can do this task? Non-free ones are OK too.

    Clarification: The ClickTale function that I want to reproduce is the recording of separate user sessions and their replay, so you can see a video of all the user's interactions with the page: where he clicks first, which links open, etc. Usually such services replay users' actions by reproducing them with JavaScript (and here comes the Ajax problem: external sites can't use Ajax because of cross-domain scripting restrictions). So I'm looking for a tool (possibly a script that I host on the site to allow cross-domain scripting) that can record actions in Ajax blocks.

    Read the article

  • OpenGL Performance Questions

    - by Daniel
    This subject, as with any optimisation problem, gets hit on a lot, but I just couldn't find what I (think I) want. A lot of tutorials, and even SO questions, have similar tips, generally covering:

    - Use GL face culling (the OpenGL function, not the scene logic)
    - Only send 1 matrix to the GPU (the projection-model-view combination), thereby decreasing the MVP calculations from per vertex to once per model (as it should be)
    - Use interleaved vertices
    - Minimise GL calls as much as possible, batching where appropriate

    And possibly a few/many others. I am (for curiosity reasons) rendering 28 million triangles in my application using several vertex buffers. I have tried all the above techniques (to the best of my knowledge), and received almost no performance change. Whilst I am receiving around 40 FPS in my implementation, which is by no means problematic, I am still curious as to where these optimisation 'tips' actually come into use. My CPU is idling around 20-50% during rendering, therefore I assume I am GPU bound for increasing performance. Note: I am looking into gDEBugger at the moment. Cross-posted at StackOverflow.
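    For clarity, this is what I understand by interleaved vertices - a minimal sketch with invented field names, not my actual code: one packed struct per vertex uploaded as a single buffer, instead of separate position/normal/UV arrays, so every attribute pointer shares the same stride with a different offset:

        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential)]
        struct Vertex
        {
            public float PosX, PosY, PosZ;    // byte offset 0
            public float NormX, NormY, NormZ; // byte offset 12
            public float U, V;                // byte offset 24; stride = 32 bytes
        }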

    Read the article

  • Tuning Red Gate: #4 of Some

    - by Grant Fritchey
    First time connecting to these servers directly (keys to the kingdom, bwa-ha-ha-ha... oh, excuse me), so I'm going to take a look at the server properties, just to see if there are any issues there. Max memory is set - cool, first possible silly mistake cleared. In fact, these look to be nicely set up. Oh, I'd like to see the ANSI standards set by default, but it's not a big deal. The default location for database data is the F:\ drive, where I saw all the activity last time. Cool, the people maintaining the servers in our company listen; the parallelism threshold is set to 35 and optimize for ad hoc is enabled. No shocks, no surprises. The basic setup is appropriate.

    On to the problem database. Nothing wrong in the properties. The database is in SIMPLE recovery, but I think it's a reporting system, so no worries there. Again, I'd prefer to see the ANSI settings for connections, but that's the worst thing I can see. Time to look at the queries, tables, indexes and statistics, because all the information I've collected over the last several days suggests that we're not looking at a systemic problem (except possibly not enough memory), but at the traditional tuning issues.

    I just want to note that I started looking at the system, not the queries. So should you when tuning your environment. I know, from the data collected through SQL Monitor, what my top poor-performing queries are, the most frequently called, etc. I'm starting with the most frequently called. I'm going to get the execution plan for this thing out of the cache (although, with the cache dumping constantly, I might not get it). And it's not there. Called 1.3 million times over the last 3 days, but it's not in cache. Wow. OK. I'll see what's in cache for this database:

        SELECT  deqs.creation_time,
                deqs.execution_count,
                deqs.max_logical_reads,
                deqs.max_elapsed_time,
                deqs.total_logical_reads,
                deqs.total_elapsed_time,
                deqp.query_plan,
                SUBSTRING(dest.text, (deqs.statement_start_offset / 2) + 1,
                          (deqs.statement_end_offset - deqs.statement_start_offset) / 2
                          + 1) AS QueryStatement
        FROM    sys.dm_exec_query_stats AS deqs
                CROSS APPLY sys.dm_exec_sql_text(deqs.sql_handle) AS dest
                CROSS APPLY sys.dm_exec_query_plan(deqs.plan_handle) AS deqp
        WHERE   dest.dbid = DB_ID('Warehouse')
        AND     deqs.statement_end_offset > 0
        AND     deqs.statement_start_offset > 0
        ORDER BY deqs.max_logical_reads DESC;

    Looking at the most expensive operation, we have our first bad boy: multiple table scans against very large sets of data, and a sort operation. A sort operation? It's an insert. Oh, I see: the table is a heap, so it's doing an insert, then sorting the data, and then inserting into the primary key. First question: why isn't this a clustered index? Let's look at some more of the queries. The next one is deceiving. Looking at its query plan, you're thinking to yourself, what's the big deal? Well, what if I told you that this thing had 8,036,318 reads? I know, you're looking at skinny little pipes. Know why? Table variable. Estimated number of rows = 1. Actual number of rows... well, I'm betting several more than one, considering it's read 8 MILLION pages off the disk in a single execution. We have a serious and real tuning candidate. Oh, and I missed this: it's loading the table variable from a user-defined function. Let me check, let me check. YES! A multi-statement table-valued user-defined function. And another tuning opportunity.
    This one's a beauty, seriously. Did I also mention that they're doing a hash against all the columns in the physical table? I'm sure that won't lead to scans of a 500,000-row table. No, not at all. OK, I lied. Of course it does. At least it's on the top part of the loop, which means the scan is only executed once. I just did a cursory check on the next several poor performers: all calling the UDF. I think I found a big tuning opportunity. At this point, I'm typing up internal emails for the company. Someone just had their baby called ugly. In addition to a series of suggested changes that we need to implement, I'm also apologizing for being such an unkind monster as to question whether that third eye & those flippers belong on such an otherwise lovely child.

    Read the article

  • How do I get my blacked out ttys back?

    - by con-f-use
    Original question: After I replaced my Ubuntu 10.10 with 11.04, all I get when I Ctrl+Alt+F1-6 into a tty is a black screen. Also, when I boot, there's a stretch of black screen after the grub2 menu is displayed; then it stays black until just before GNOME starts. I have an Nvidia GeForce Quadro FX 770M in my HP EliteBook 8530w. How do I get my ttys (aka 'virtual terminals') to work again?

    My efforts, in chronological order:

    1. Grub and gfxpayload seemed to be the problem, I figured, so I went along with this guide for higher tty resolution, which led to the grub2 menu displaying in my native resolution rather than 800x600. The black screen problem remains.
    2. I found some bug reports via Google about other Nvidia cards having this problem. I tried uninstalling the Nvidia driver. No effect. Also tried different resolutions.
    3. With an older version of the kernel it works, though not perfectly: the ttys are usable, but the black screen between the grub2 menu and GNOME start remains. Not really a solution.
    4. Tried so much that I lost track. Reinstalled grub2 and linux-image-2.6.38-8-generic, then did this to my /etc/default/grub in accordance with the aforementioned guide (/etc/grub.d/00_header edited as well):

        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        #GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=3
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        #GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        GRUB_GFXMODE=1680x1050x32

    To my surprise, I can now use my ttys in native resolution. The black screen between the grub2 menu and the GNOME login screen is still there, though. That is annoying, since I also use an encrypted disk and thus have to enter my passphrase in total darkness... Still looking for a solution, but urgency is low.
    5. Downloaded and installed a later version of the Nvidia driver. No difference.
    6. Tried the GRUB_CMDLINE_LINUX="vga=" parameter. No effect. nomodeset has no effect either, not even in combination with vga=...
    7. Tried echo FRAMEBUFFER=y > /etc/initramfs-tools/conf.d/splash. No effect (see comment).

    On the verge of resignation... Bounty period soon to end.

    Read the article

  • Google webmaster tools: parameters that only apply on one page

    - by Imagine digital
    I'm trying to get my e-commerce website on Google and am still figuring out how it all works. I have seen the feature named URL Parameters, which allows me to set different parameters that affect page content to be indexed (one can also set parameters that do not affect the page, but for me that does not apply). The question I have about this is whether and how I should add parameters that only exist on some pages of my site. Example: the homepage of my site is www.mysite.nl - no parameters at all. But when a user clicks the navigation bar, it links to www.mysite.nl/itemList.php?category=&....subCategory=.... The parameters category and subCategory define whether there is content on my itemList page and what that content is; the page gets matching products out of my database based on those 2 variables. The question: how do I apply Google's URL Parameters feature properly for my website?

    Read the article

  • Passing a text message to a web page from a web user control

    - by Narendra Tiwari
    Here is a brief summary of how we can send a text message to a web page from a web user control. Delegates are the solution. There are many good articles on .NET delegates; you can refer to some of them below. The scenario: we want to send a text message to the page on completion of some activity in the web control.

    1/ Create a base class for the web control (refer to the code below), assuming we are passing text messages from the web user control to the page:
    - Declare a delegate
    - Declare an event of the delegate type

        using System;
        using System.Data;
        using System.Configuration;
        using System.Web;
        using System.Web.Security;
        using System.Web.UI;
        using System.Web.UI.WebControls;
        using System.Web.UI.WebControls.WebParts;
        using System.Web.UI.HtmlControls;

        // Declaring a delegate with a message parameter
        public delegate void SendMessageToThePageHandler(string messageToThePage);

        public class ControlBase : System.Web.UI.UserControl
        {
            // Event of the delegate type; the hosting page subscribes to this
            // (the declaration is implied by the subscription code in step 2)
            public event SendMessageToThePageHandler sendMessageToThePage;

            public ControlBase()
            {
                // TODO: Add constructor logic here
            }

            protected override void OnInit(EventArgs e)
            {
                base.OnInit(e);
            }

            private string strMessageToPass;

            /// <summary>
            /// MessageToPass - Property to pass a text message to the page
            /// </summary>
            public string MessageToPass
            {
                get { return strMessageToPass; }
                set { strMessageToPass = value; }
            }

            /// <summary>
            /// SendMessageToPage - Called from the control to invoke the event
            /// </summary>
            /// <param name="strMessage">Message to pass</param>
            public void SendMessageToPage(string strMessage)
            {
                if (this.sendMessageToThePage != null)
                    this.sendMessageToThePage(strMessage);
            }
        }

    2/ Register the events on the web page in the Page_Load event:

        this.AddControlEventHandler((ControlBase)WebUserControl1);
        this.AddControlEventHandler((ControlBase)WebUserControl2);

        /// <summary>
        /// AddControlEventHandler - Hooking the web user control event
        /// </summary>
        /// <param name="ctrl"></param>
        private void AddControlEventHandler(ControlBase ctrl)
        {
            ctrl.sendMessageToThePage += delegate(string strMessage)
            {
                // display message
                lblMessage.Text = strMessage;
            };
        }

    3/ From the web user control, call SendMessageToPage("your message") on completion of the activity to raise the message to the page.

    References: http://www.akadia.com/services/dotnet_delegates_and_events.html

    Read the article

  • Clarification about Event Producer in StreamInsight

    - by sandy
    I need a small clarification about StreamInsight. I know from the docs that StreamInsight can handle multiple concurrent events, but will the event producer be a separate function? For example: I need to watch a folder for new files, because all my sensors will write their readings every day into a new file on a particular drive.

    Method 1: FileSystemWatcher. This is the traditional approach, where we write a service using FileSystemWatcher to watch a folder for new files, etc. Upon receiving an event from FileSystemWatcher, I'll perform some operations on those files. How do I do this using StreamInsight? I came to know that using IObservable I can push events to StreamInsight, but is there anything in StreamInsight to watch a folder, like FileSystemWatcher? Or, in order to raise events to StreamInsight, do we need to use FileSystemWatcher ourselves? Any suggestion regarding this is highly appreciated. Thanks in advance.
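    To make Method 1 concrete, here is a rough sketch of how I imagine bridging FileSystemWatcher into an IObservable (assuming the Reactive Extensions are available; the StreamInsight adapter/query wiring is omitted, and the class name is invented):

        using System;
        using System.IO;
        using System.Reactive.Linq; // Rx (System.Reactive) - assumed available

        class FolderSource
        {
            // Wraps FileSystemWatcher's Created event as an IObservable of file paths,
            // which could then be pushed into StreamInsight as an input source.
            public static IObservable<string> NewFiles(string path)
            {
                var watcher = new FileSystemWatcher(path) { EnableRaisingEvents = true };
                return Observable
                    .FromEventPattern<FileSystemEventArgs>(watcher, "Created")
                    .Select(evt => evt.EventArgs.FullPath);
            }
        }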

    Read the article

  • Single python file distribution: module or package?

    - by DanielSank
    Suppose I have a useful Python function or class (or whatever) called useful_thing which exists in a single file. There are essentially two ways to organize the source tree. The first way uses a single module:

    - setup.py
    - README.rst
    - ...etc...
    - foo.py

    where useful_thing is defined in foo.py. The second strategy is to make a package:

    - setup.py
    - README.rst
    - ...etc...
    - foo
      |- module.py
      |- __init__.py

    where useful_thing is defined in module.py. In the package case, __init__.py would look like this:

        from foo.module import useful_thing

    so that in both cases you can do from foo import useful_thing.

    Question: Which way is preferred, and why?

    EDIT: Since user gnat says this question is poorly formed, I'll add that the official Python packaging tutorial does not seem to comment on which of the methods described above is the preferred one. I am explicitly not giving my personal list of pros and cons, because I'm interested in whether there is a community-preferred method, not in generating a discussion of pros/cons :)

    Read the article

  • Create a Self-Signed Certificate on WLS 10.3.5 Supporting the SHA-256 Algorithm

    - by adejuanc
    1) Set the domain environment so keytool can be called:

        $ . setDomainEnv.sh

    2) Generate the key:

        $ keytool -genkey -alias selfsignedcert -keyalg RSA -sigalg SHA256withRSA -keypass privatepassword -keystore identity.jks -storepass password -validity 365
        What is your first and last name?
          [Unknown]: adejuan-desktop.cl.oracle.com
        What is the name of your organizational unit?
          [Unknown]: a
        What is the name of your organization?
          [Unknown]: e
        What is the name of your City or Locality?
          [Unknown]: i
        What is the name of your State or Province?
          [Unknown]: o
        What is the two-letter country code for this unit?
          [Unknown]: U
        Is CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U correct?
          [no]: yes

    3) Export the root certificate:

        $ keytool -export -alias selfsignedcert -sigalg SHA256withRSA -file root.cer -keystore identity.jks
        Enter keystore password:
        Certificate stored in file <root.cer>

    4) Import the root certificate into the trust store:

        $ keytool -import -alias selfsignedcert -sigalg SHA256withRSA -trustcacerts -file root.cer -keystore trust.jks
        Enter keystore password:
        Re-enter new password:
        Owner: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
        Issuer: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
        Serial number: 4f17459a
        Valid from: Wed Jan 16 15:33:22 CLST 2012 until: Thu Jan 15 15:33:22 CLST 2013
        Certificate fingerprints:
          MD5:  7F:08:FA:DE:CD:D5:C3:D3:83:ED:B8:4F:F2:DA:4E:A1
          SHA1: 87:E4:7C:B8:D7:1A:90:53:FE:1B:70:B6:32:22:5B:83:29:81:53:4B
        Signature algorithm name: SHA256withRSA
        Version: 3
        Trust this certificate? [no]: yes
        Certificate was added to keystore

    5) To check the contents of the keystore:

        $ keytool -v -list -keystore identity.jks
        Enter keystore password:

        ***************** WARNING WARNING WARNING *****************
        * The integrity of the information stored in your keystore *
        * has NOT been verified! In order to verify its integrity, *
        * you must provide your keystore password.                 *
        ***************** WARNING WARNING WARNING *****************

        Keystore type: JKS
        Keystore provider: SUN
        Your keystore contains 1 entry
        Alias name: selfsignedcert
        Creation date: Jan 18, 2012
        Entry type: PrivateKeyEntry
        Certificate chain length: 1
        Certificate[1]:
        Owner: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
        Issuer: CN=adejuan-desktop.cl.oracle.com, OU=a, O=e, L=i, ST=o, C=U
        Serial number: 4f17459a
        Valid from: Wed Jan 16 15:42:16 CLST 2012 until: Thu Jan 15 15:42:16 CLST 2013
        Certificate fingerprints:
          MD5:  7F:08:FA:DE:CD:D5:C3:D3:83:ED:B8:4F:F2:DA:4E:A1
          SHA1: 87:E4:7C:B8:D7:1A:90:53:FE:1B:70:B6:32:22:5B:83:29:81:53:4B
        Signature algorithm name: SHA256withRSA
        Version: 3

    6) In some cases, this parameter is needed among the server startup parameters:

        -Dweblogic.ssl.JSSEEnabled=true

    Otherwise, enable it from the server configuration -> SSL -> "Use JSSE" checkbox.

    Read the article

  • My VPS cannot send email

    - by ifdion
    Webmaster newbie question: I have a low-end VPS (128MB RAM) running on Debian. I used a bash script by ilevkov to set up the site. After some trial and error, I managed to set up WordPress on it. Just now I found out that my VPS can't send any email. I tested using the WordPress reset-password email, and it shows "The e-mail could not be sent. Possible reason: your host may have disabled the mail() function..." After some Googling I noticed that I can send email from ssh, so I tried:

        mail [email protected]
        Subject: Halo dion
        some message
        .

    and the result said:

        EOT
        /usr/lib/sendmail: No such file or directory
        "/root/dead.letter" 9/243
        . . . message not sent.

    The question: How can I fix my VPS mail settings?

    Read the article

  • Dangerous programming

    - by benhowdle89
    OK, I'm talking pure software/web - I'm not on about code to power life-support machines or NASA rockets. In terms of software/web development, what is the most dangerous single piece of code someone could put into a program (say, if they had a grudge against a client/employer)? In PHP, the first thing that comes to mind is some sort of file deletion:

        function EmptyDir($dir)
        {
            $handle = opendir($dir);
            while (($file = readdir($handle)) !== false) {
                echo "$file <br>";
                @unlink($dir.'/'.$file);
            }
            closedir($handle);
        }
        EmptyDir('images');

    Or a PHP script that takes a user's sensitive input and posts it to a Google sitemap or something? I hope this doesn't get closed off as subjective, as there surely must be a ranking order of dangerous code. So I'm asking for the No. 1 spot :)

    DISCLAIMER: I have no grudges against anyone, just curious for the answer!

    Read the article

  • Dependency Injection/IoC container practices when writing frameworks

    - by Dave Hillier
    I've used various IoC containers (Castle.Windsor, Autofac, MEF, etc.) for .NET in a number of projects, and I have found they tend to encourage a number of bad practices. Are there any established practices for IoC container use, particularly when providing a platform/framework? My aim as a framework writer is to make code as simple and as easy to use as possible: I'd rather write one line of code to construct an object than ten, or even just two. A couple of code smells that I've noticed and don't have good suggestions for:

    - A large number of parameters (5) for constructors. Creating services tends to be complex; all of the dependencies are injected via the constructor, despite the fact that the components are rarely optional (except maybe in testing). (See the sketch after this list.)
    - Lack of private and internal classes. This one may be a specific limitation of using C# and Silverlight, but I'm interested in how it is solved. It's difficult to tell what a framework's interface is if all the classes are public; it allows me access to private parts that I probably shouldn't touch.
    - Coupling the object lifecycle to the IoC container. It is often difficult to manually construct the dependencies required to create objects, and object lifecycle is too often managed by the IoC framework. I've seen projects where most classes are registered as singletons. You get a lack of explicit control and are also forced to manage the internals (this relates to the above point: all classes are public and you have to inject them).
    - Wrapping statics just to inject them. For example, the .NET framework has many static methods, such as DateTime.UtcNow; many times I have seen this wrapped and injected as a construction parameter.

    Depending on a concrete implementation makes my code hard to test; injecting a dependency makes my code hard to use, particularly if the class has many parameters. How do I provide both a testable interface and one that is easy to use? What are the best practices?
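    To make the first smell concrete, here is the kind of constructor I keep running into - a minimal sketch where every name is hypothetical, including the IClock wrapper around DateTime.UtcNow mentioned above:

        using System;

        public interface IOrderRepository { }
        public interface IPaymentGateway { }
        public interface IInventoryService { }
        public interface INotificationSender { }
        public interface IClock { DateTime UtcNow { get; } } // wraps the static DateTime.UtcNow

        public class OrderService
        {
            // Five injected, effectively mandatory dependencies: easy for the
            // container to satisfy, painful to construct by hand.
            public OrderService(IOrderRepository orders,
                                IPaymentGateway payments,
                                IInventoryService inventory,
                                INotificationSender notifier,
                                IClock clock)
            {
                // ...
            }
        }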

    Read the article
