Search Results

Search found 17470 results on 699 pages for 'single quote'.


  • Write Scheme data structures so they can be eval-d back in, or alternative

    - by Jesse Millikan
    I'm writing an application (a juggling pattern animator) in PLT Scheme that accepts Scheme expressions as values for some fields. I'm attempting to write a small text editor that will let me "explode" expressions into expressions that can still be eval'd but contain the data as literals for manual tweaking. For example, (4hss->sexp "747") is a function call that generates a legitimate pattern. If I eval and print that, it becomes (((7 3) - - -) (- - (4 2) -) (- (7 2) - -) (- - - (7 1)) ((4 0) - - -) (- - (7 0) -) (- (7 2) - -) (- - - (4 3)) ((7 3) - - -) (- - (7 0) -) (- (4 1) - -) (- - - (7 1))) which can be "read" as a string, but will not "eval" the same as the function. For this statement, of course, what I need would be as simple as (quote (((7 3... but other examples are non-trivial. This one, for example, contains structs which print as vectors: pair-of-jugglers ; --> (#(struct:hand #(struct:position -0.35 2.0 1.0) #(struct:position -0.6 2.05 1.1) 1.832595714594046) #(struct:hand #(struct:position 0.35 2.0 1.0) #(struct:position 0.6 2.0500000000000003 1.1) 1.308996938995747) #(struct:hand #(struct:position 0.35 -2.0 1.0) #(struct:position 0.6 -2.05 1.1) -1.3089969389957472) #(struct:hand #(struct:position -0.35 -2.0 1.0) #(struct:position -0.6 -2.05 1.1) -1.8325957145940461)) I've thought of at least three possible solutions, none of which I like very much. Solution A is to write a recursive eval-able output function myself for a reasonably large subset of the values that I might be using. There (probably...) won't be any circular references by the nature of the data structures used, so that wouldn't be such a long job. The output would end up looking like `(((3 0) (... ; ex 1 `(,(make-hand (make-position ... ; ex 2 Or even worse if I couldn't figure out how to do it properly with quasiquoting. Solution B would be to write out everything as (read (open-input-string "(big-long-s-expression)")) which, technically, solves the problem I'm bringing up but is... ugly. Solution C might be a different approach of giving up eval and using only read for parsing input, or an uglier approach where the s-expression is used directly as data if eval fails, but those both seem unpleasant compared to using Scheme values directly. Undiscovered Solution D would be a PLT Scheme option, function or library I haven't located that would match Solution A. Help me out before I start having bad recursion dreams again.

    Read the article

  • Delphi / MySql : Problems escaping strings

    - by mawg
    N00b here, having problems escaping strings. I used the QuotedStr() function - shouldn't that be enough? Unfortunately, the string that I am trying to quote is rather messy, but I will post it here in case anyone wants to paste it into WinMerge or KDiff3, etc. I am trying to store an entire Delphi form into the database, rather than into a .DFM file. It has only one field, a TEdit edit box. The debugger shows the form as text as 'object Form1: TScriptForm'#$D#$A' Left = 0'#$D#$A' Top = 0'#$D#$A' Align = alClient'#$D#$A' BorderStyle = bsNone'#$D#$A' ClientHeight = 517'#$D#$A' ClientWidth = 993'#$D#$A' Color = clBtnFace'#$D#$A' Font.Charset = DEFAULT_CHARSET'#$D#$A' Font.Color = clWindowText'#$D#$A' Font.Height = -11'#$D#$A' Font.Name = 'MS Sans Serif''#$D#$A' Font.Style = []'#$D#$A' OldCreateOrder = False'#$D#$A' SaveProps.Strings = ('#$D#$A' 'Visible=False')'#$D#$A' PixelsPerInch = 96'#$D#$A' TextHeight = 13'#$D#$A' object Edit1: TEdit'#$D#$A' Left = 192'#$D#$A' Top = 64'#$D#$A' Width = 121'#$D#$A' Height = 21'#$D#$A' TabOrder = 8'#$D#$A' end'#$D#$A'end'#$D#$A before calling QuotedStr() and ''object Form1: TScriptForm'#$D#$A' Left = 0'#$D#$A' Top = 0'#$D#$A' Align = alClient'#$D#$A' BorderStyle = bsNone'#$D#$A' ClientHeight = 517'#$D#$A' ClientWidth = 993'#$D#$A' Color = clBtnFace'#$D#$A' Font.Charset = DEFAULT_CHARSET'#$D#$A' Font.Color = clWindowText'#$D#$A' Font.Height = -11'#$D#$A' Font.Name = ''MS Sans Serif'''#$D#$A' Font.Style = []'#$D#$A' OldCreateOrder = False'#$D#$A' SaveProps.Strings = ('#$D#$A' ''Visible=False'')'#$D#$A' PixelsPerInch = 96'#$D#$A' TextHeight = 13'#$D#$A' object Edit1: TEdit'#$D#$A' Left = 192'#$D#$A' Top = 64'#$D#$A' Width = 121'#$D#$A' Height = 21'#$D#$A' TabOrder = 8'#$D#$A' end'#$D#$A'end'#$D#$A''' afterwards. The strange thing is that my complete command 'INSERT INTO designerFormDfm(designerFormDfmText) VALUES ("'object Form1: TScriptForm'#$D#$A' Left = 0'#$D#$A' Top = 0'#$D#$A' Align = alClient'#$D#$A' BorderStyle = bsNone'#$D#$A' ClientHeight = 517'#$D#$A' ClientWidth = 993'#$D#$A' Color = clBtnFace'#$D#$A' Font.Charset = DEFAULT_CHARSET'#$D#$A' Font.Color = clWindowText'#$D#$A' Font.Height = -11'#$D#$A' Font.Name = ''MS Sans Serif'''#$D#$A' Font.Style = []'#$D#$A' OldCreateOrder = False'#$D#$A' SaveProps.Strings = ('#$D#$A' ''Visible=False'')'#$D#$A' PixelsPerInch = 96'#$D#$A' TextHeight = 13'#$D#$A' object Edit1: TEdit'#$D#$A' Left = 192'#$D#$A' Top = 64'#$D#$A' Width = 121'#$D#$A' Height = 21'#$D#$A' TabOrder = 8'#$D#$A' end'#$D#$A'end'#$D#$A''");' executes in a MySql console, but not from Delphi, where I pass that command as the parameter command to a function which does: ADOCommand.CommandText := command; ADOCommand.CommandType := cmdText; ADOCommand.Execute(); I can only assume that I am having problems escaping sequences which contain single quotes (and QuotedStr() doesn't seem to escape backslashes(?!)). What am I doing that is obviously, glaringly wrong?
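
    For illustration only, a minimal sketch of the two quoting rules in play, reusing the table name from the question (the values are shortened stand-ins, not the actual DFM text):

        -- Doubling is how QuotedStr() embeds a single quote, and MySQL accepts it:
        INSERT INTO designerFormDfm (designerFormDfmText)
        VALUES ('Font.Name = ''MS Sans Serif''');

        -- But by default MySQL ALSO treats backslash as an escape character,
        -- so any backslashes in the data must be doubled as well:
        INSERT INTO designerFormDfm (designerFormDfmText)
        VALUES ('C:\\Forms\\Form1.dfm');

    Passing the text as an ADO parameter instead of concatenating it into the INSERT string would sidestep both escaping rules entirely.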

    Read the article

  • Thinking about introducing PHP/MySQL into a .NET/SQL Server environment. Thoughts?

    - by abszero
    I posted this over at reddit but it didn't gain any momentum. So here is what is going on: our company was recently purchased by another web shop and I was promoted to head of development here in our office. Our office is completely .NET/SQL Server and the company who purchased us is a *nix/PHP/MySQL shop. Now several of our large clients who are on the .NET platform are up for complete rewrites (the sites are from '04 and are running on the 1.x framework.) While reviewing the proposal for one client with my superior I came across a pretty extensive module which would require several hundred man-hours to complete and voiced some concern about it in relation to the quote. One of the guys from the PHP group happened to hear this and told me of a module that they (the PHP group) use in Drupal that does exactly what the proposal in front of me was describing, and it only took, at most, 8 hours to completely set up / configure. My superior suggested that I take a look at Drupal and the module in question over the weekend but stressed that we should only go that route if it really made sense. So this weekend I spun up a CentOS instance in VirtualBox and started playing around with Drupal. I am still fleshing it out so I don't have a solid opinion on it just yet. Anyway I have some questions / fears that I was hoping proggit could help me out with! Has anyone had experience doing this and, if so, how did it turn out? I am completely ignorant of what IDEs (if any) are available for PHP. The last time I worked with PHP it was in Notepad and that was less than intuitive. So is there a more intuitive IDE out there for PHP dev? I don't want to scare my .NET guys. Since the merger all of our new business clients that have had relatively small websites have gone on Drupal, with the larger sites going on .NET. My concern is that if they see a large site go onto Drupal they might start getting anxious and start handing out their resumes. For the foreseeable future there are no plans to liquidate the .NET platform, and really we can't, just from a support standpoint. What would be the best way to approach this? Any other helpful info? Thanks!

    Read the article

  • C strange array behaviour

    - by LukeN
    After learning that strncmp is not what it seems to be and that strlcpy is not available on my operating system (Linux), I figured I could try and write it myself. I found a quote from Ulrich Drepper, the libc maintainer, who posted an alternative to strlcpy using mempcpy. I don't have mempcpy either, but its behaviour was easy to replicate. First off, this is the test case I have: #include <stdio.h> #include <string.h> #define BSIZE 10 void insp(const char* s, int n) { int i; for (i = 0; i < n; i++) printf("%c ", s[i]); printf("\n"); for (i = 0; i < n; i++) printf("%02X ", s[i]); printf("\n"); return; } int copy_string(char *dest, const char *src, int n) { int r = strlen(memcpy(dest, src, n-1)); dest[r] = 0; return r; } int main() { char b[BSIZE]; memset(b, 0, BSIZE); printf("Buffer size is %d", BSIZE); insp(b, BSIZE); printf("\nFirst copy:\n"); copy_string(b, "First", BSIZE); insp(b, BSIZE); printf("b = '%s'\n", b); printf("\nSecond copy:\n"); copy_string(b, "Second", BSIZE); insp(b, BSIZE); printf("b = '%s'\n", b); return 0; } And this is its result: Buffer size is 10 00 00 00 00 00 00 00 00 00 00 First copy: F i r s t b = 46 69 72 73 74 00 62 20 3D 00 b = 'First' Second copy: S e c o n d 53 65 63 6F 6E 64 00 00 01 00 b = 'Second' You can see in the internal representation (the lines insp() created) that there's some noise mixed in, like the printf() format string in the inspection after the first copy, and a foreign 0x01 in the second copy. The strings are copied intact and it correctly handles too-long source strings (let's ignore the possible issue with passing 0 as length to copy_string for now, I'll fix that later). But why are there foreign array contents (from the format string) inside my destination? It's as if the destination was actually RESIZED to match the new length.

    Read the article

  • Little CSS problem with Auto height and nested div's

    - by GeekDrop.com
    So I'm finally learning my way around CSS more and have run into a small problem. I have a container div with a few divs inside of it; one of them is a bit of text (which can be a random height) and another is an image that will have a MAX height of 200px. I am using a dotted/colored background behind them that needs to auto-expand to the height of whichever is the tallest, either the text or the image. Right now when I use height:auto on the container div it works perfectly for the random-height text: Example Screenshot But it's only adjusting according to the text's height; if the image is taller than the text, the image overflows the bottom of the background dotted/colored box. Example Screenshot The CSS I'm using currently is this: h1 div#like_detailed { margin: 0; font-size: 1.1em; width: 700px; } #details-image img { border: none; clear: left; float: right; margin: -45px 0 0 0; max-height: 200px; padding: 0 7px 0 10px; } #deets-container { background-color: #FEF; border: #190AE7 1px dotted; height: auto; margin-top: 0; margin-bottom: 30px; padding-top: 10px; padding-right: 10px; padding-left: 10px; padding-bottom: 0; } And the HTML for it is this: <div id="deets-container" class="rounded"> <!-- Button --> <div class="likebtnframe">(some code)</div> <!-- Button --> <div class="tweetbtnframe">(some code)</div> <!-- Button --> <ul id="share"> <li><a name="share">(some code)</a></li> </ul> <!-- Submitted By --> <div class="submitter_detailed"><span class="submitter-color smalltext">(some code)</span> (some code)</div> <!-- Image --> <div id="details-image">(some code)</div> <!-- Like / Quote --> <h1 id="like_detailed">(some code)</h1> </div> I have a feeling this is pretty easy but I'm running out of time to sort it out on my own. Anyone?

    Read the article

  • Patterns for dynamic CMS components (event driven?)

    - by CitrusTree
    Sorry my title is not great; this is my first real punt at moving 100% to OO, as I've been procedural for more years than I can remember. I'm finding it hard to understand if what I'm trying to do is possible. Depending on people's thoughts on the 2 following points, I'll go down that route. The CMS I'm putting together is quite small, however it focuses very much on different types of content. I could easily use Drupal, which I'm very comfortable with, but I want to give myself a really good reason to move myself into design patterns / OO-PHP. 1) I have created a base 'content' class which I wish to be able to extend to handle different types of content. The base class, for example, handles HTML content, and extensions might handle XML or PDF output instead. On the other hand, at some point I may wish to extend the base class for a given project completely. I.e. if class 'content-v2' extended class 'content' for that site, any calls to that class should actually call 'content-v2' instead. Is that possible? If the code instantiates an object of type 'content' - I actually want it to instantiate one of type 'content-v2'... I can see how to do it using inheritance, but that appears to involve referring to the class explicitly; I can't see how to link in the class I want it to use instead dynamically. 2) Secondly, the way I'm building this at the moment is horrible, and I'm not happy with it. It feels very linear indeed - i.e. get session details, get content, build navigation, theme page, publish. To do this all the objects are called 1-by-1, which is all very static. I'd like it to be more dynamic so that I can add to it at a later date (very closely related to the first question). Is there a way that, instead of my orchestrator class calling all the other classes 1-by-1 and then building the whole thing up at the end, each of the other classes can 'listen' for specific events, then at the applicable point jump in and do their bit? That way the orchestrator class would not need to know what other classes were required, and call them 1-by-1. Sorry if I've got this all twisted in my head. I'm trying to build this so it's really flexible.

    Read the article

  • â?? in my html after purify

    - by mmcgrail
    I have a database that I am rebuilding; the table structure was crap, so I'm porting some of the data from one table to another. This data appears to have been copy-pasted from an MS Office product, so as I'm getting the data I clean it up with HTML Purifier and a little str_replace in PHP. Here is the clean function: function clean_html($html) { $config = HTMLPurifier_Config::createDefault(); $config->set('AutoFormat','RemoveEmpty',true); $config->set('HTML','AllowedAttributes','href,src'); $config->set('HTML','AllowedElements','p,em,strong,a,ul,li,ol,img'); $purifier = new HTMLPurifier($config); $html = $purifier->purify($html); $html = str_replace('&nbsp;',' ',$html); $html = str_replace("\r",'',$html); $html = str_replace("\n",'',$html); $html = str_replace("\t",'',$html); $html = str_replace(' ',' ',$html); $html = str_replace('<p> </p>','',$html); $html = str_replace(chr(160),' ',$html); return trim($html); } But when I put the results into my new table and output them to CKEditor I get those three characters. I then have a JavaScript function that is called to remove special characters from the content of CKEditor too; it doesn't clean it either: function remove_special(str) { var rExps=[ /[\xC0-\xC2]/g, /[\xE0-\xE2]/g, /[\xC8-\xCA]/g, /[\xE8-\xEB]/g, /[\xCC-\xCE]/g, /[\xEC-\xEE]/g, /[\xD2-\xD4]/g, /[\xF2-\xF4]/g, /[\xD9-\xDB]/g, /[\xF9-\xFB]/g, /\xD1/,/\xF1/g, "/[\u00a0|\u1680|[\u2000-\u2009]|u200a|\u200b|\u2028|\u2029|\u202f|\u205f|\u3000|\xa0]/g", /\u000b/g,'/[\u180e|\u000c]/g', /\u2013/g, /\u2014/g, /\xa9/g,/\xae/g,/\xb7/g,/\u2018/g,/\u2019/g,/\u201c/g,/\u201d/g,/\u2026/g]; var repChar=['A','a','E','e','I','i','O','o','U','u','N','n',' ','\t','','-','--','(c)','(r)','*',"'","'",'"','"','...']; for(var i=0; i<rExps.length; i++) { str=str.replace(rExps[i],repChar[i]); } for (var x = 0; x < str.length; x++) { charcode = str.charCodeAt(x); if ((charcode < 32 || charcode > 126) && charcode !=10 && charcode != 13) { str = str.replace(str.charAt(x), ""); } } return str; } Does anyone know offhand what I need to do to get rid of them? I think they may be some sort of quote.
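
    For what it's worth, three garbage characters like these are consistent with the UTF-8 bytes of a curly quote (U+2019) being stored or rendered under a single-byte charset, which would point to a charset mismatch rather than something HTML Purifier or a JavaScript filter can strip. A minimal sketch of aligning the MySQL side (the table name is hypothetical):

        -- Make the connection charset match what PHP sends and reads:
        SET NAMES utf8;

        -- Convert a table whose declared character set is wrong:
        ALTER TABLE imported_content CONVERT TO CHARACTER SET utf8;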

    Read the article

  • Database advantages? Access, MySQL, msSQL, or any others?

    - by JimZ
    Dear all Stackoverflowers, I just started to learn programming and now I'm putting this question online based on a quote: no question is silly. My work needs to develop an order system based on the web, which requires a database system. Having used Excel for years as a general office user, I naturally turned to Access. However, most people say Access is very limited compared to MySQL or MSSQL, or any other more professional database system. But after developing some functions for my company's order system, I really find Access can fulfill my requirements. I also tried developing with MSSQL, which I found not quite convenient to use. I have searched on Stack Overflow and found no general answer to my doubts. Now I am sincerely hoping some experienced and professional developers could clear them up. I'm listing some Access advantages below, which I don't think other database systems have; I hope you could help me also find these advantages in others. 1. Access is portable: I can just copy a xxx.accdb file to my company and continue with development. 2. Access easily generates helpful tables; for example, it will automatically generate an auto-counting field that can be used as the primary key value. 3. It is more compatible with Excel, to display and filter data. 4. Importantly, it needs nearly no environment to set up; it just needs MS Office to be installed. ............others However, I also find some points where MSSQL has the advantage: 1. security reasons 2. easy to back up (just use the BACKUP..... SQL statement to do it) 3. can edit stored procedures to save some functions to the database ...............others Specifically, I wish some friends could tell me how to make other databases portable, since I usually work both at home and in the office. It's a headache to move MSSQL work to my office, since the version of MSSQL is not the same. Thank you all and best regards, :)
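
    For reference, point 2 in the MSSQL list really is a single statement in SQL Server (the database name and path here are illustrative):

        BACKUP DATABASE OrderSystem
        TO DISK = 'C:\Backups\OrderSystem.bak';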

    Read the article

  • Outlook 2011 Contact Import from CSV with Notes containing new lines / cr / lf

    - by Paul Hargreaves
    I'm trying to import several thousand contacts into Outlook 2011 for Mac. Everything is working well except the Notes field, as I cannot figure out how to get new lines / carriage returns into it. There is no documentation for the exact format that Outlook supports. After searching the web and experimenting I have tried: Creating a single contact in Outlook with Notes containing several lines of text. I then export the contact to a csv, delete the contact in Outlook, then re-import. All lines in Notes merge together :-/ Following tips I found, such as wrapping fields that contain new lines in quotes, e.g. http://creativyst.com/Doc/Articles/CSV/CSV01.htm (search for line-break). Switching the CSV format from DOS to Unix, experimenting with manually injected ctrl-characters such as ^M, etc. I would include an example export/import but unfortunately the line breaks involved do not work well in a SU code block.
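
    Since the poster's own sample can't be shown, here is a generic fragment (hypothetical data) of the RFC 4180 convention those tips describe: the multi-line field is wrapped in double quotes and the line breaks stay inside it. Whether Outlook 2011 for Mac honors this on import is exactly the open question:

        Name,Notes
        "Jane Doe","First line of notes
        second line of notes"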

    Read the article

  • CertificateServicesClient-CredentialRoaming error 1005

    - by PVitt
    We have a Microsoft Team Foundation Server (Single Server Installation, i.e. Microsoft SQL Server 2008, Microsoft Windows SharePoint Services 3.0) installed on a Windows Server 2008 machine. The TFS works fine, but there are error events logged frequently: Log Name: Application Source: Microsoft-Windows-CertificateServicesClient-CredentialRoaming Event ID: 1005 Level: Error Description: Certificate Services Client: Credential Roaming failed to write to the Active Directory. Error code 5 (Access is denied.) The problem is clear (the error message is quite precise) but I don't have a clue how to fix it! Where does the access have to be granted? What permissions have to be set?

    Read the article

  • CertMgr fails trying to import an SPC file

    - by nsr81
    We have an SPC file which came with the Cisco IP Communicator installer. It needs to be imported into the localMachine ROOT store. However, when certmgr.exe is run against this SPC file, it errors out. It doesn't matter if it's run from within the installer or manually. The command I've tried using is: certmgr.exe -add -all CDPcredentials.spc -s -r localMachine root The result displayed is: Error: Failed to save to the destination store CertMgr Failed There is no other information, no log file, nothing in the event viewer. It's almost as if the ROOT store is in a read-only state. I would also like to point out that I'm able to import single certificates, just not an SPC file, which contains multiple certificates. I have also tried different versions of the CertMgr utility. Running on Windows 7 Enterprise 64-bit. Any assistance would be appreciated.

    Read the article

  • How to suppress an unwanted external Autodiscover lookup?

    - by chris
    In a small network with Exchange 2007, when starting Outlook 2010 (and once in a while afterwards), users get a prompt to confirm that it's safe to get account configuration information from cpanelemaildiscovery.cpanel.net/autodiscover/autodiscover.xml (I could read in a couple of forums that there is a bug in cPanel, but that's beside the point.) I'm puzzled because I can't find any autodiscover DNS entries anywhere, neither internally nor externally. The only hint is that we use an external hosting company for our website and for one single email address, which runs on cPanel. So I guess that Outlook makes an external DNS query to test all entries? It creates a lot of confusion for the users, and frankly I'm not too happy that the external hosting company gets contacted by all our users. How can I suppress this behavior? Thanks

    Read the article

  • The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000.

    - by lankylad
    I have a SQL Server 2008 Analysis Services Project. In the Data Source View I have a Named Query which references a single Data Source containing three tables. The Project processes successfully and the cube can be browsed. I recently added a second Data Source to the Data Source View and linked a table to the original Named Query. When I try to process the project, I get the message: OLE DB error: OLE DB or ODBC error: The OLE DB provider "SQLNCLI10.1" has not been registered.; 42000. The Connection String for both Data Sources uses SQLNCLI10.1

    Read the article

  • Unable to update the EntitySet because it has a DefiningQuery and no <UpdateFunction> element

    - by Harish Ranganathan
    When working with the ADO.NET Entity Data Model, it is common to generate entity schema for more than a single table from our database. With entity model generation automated by Visual Studio, it becomes tempting to create and work with entity models to achieve an object mapping relationship. One of the errors that you might hit while trying to update an entity set, either programmatically using context.SaveChanges or while using the automatic insert/update code generated by GridView etc., is "Unable to update the EntitySet <EntityName> because it has a DefiningQuery and no <UpdateFunction> element exists in the <ModificationFunctionMapping> element to support the current operation". While the description is pretty lengthy, the immediate thing that comes to mind is to open the entity model's generated code and see if you can update it accordingly. However, the first thing to check is whether the table the entity set is generated from defines a primary key. Most of the time we create tables with primary keys, but some reference tables and tables which don't have a primary key cannot be updated through the entity context, and hence the context throws this error. Unless the entity is based on a View, in which case the default model is read-only by design, the above error usually occurs because there is no primary key defined in the table. There are other reasons why this error could pop up, which I am not going into for the sake of simplicity of this post. If you find something new, please feel free to share it in comments. Hope this helps. Cheers !!!
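
    As a minimal sketch of the fix described above (the table and column names are illustrative, and the key column must be NOT NULL):

        -- Without a primary key the designer maps the table read-only via a
        -- DefiningQuery; adding a key and refreshing the model makes it updatable:
        ALTER TABLE dbo.ReferenceTable
        ADD CONSTRAINT PK_ReferenceTable PRIMARY KEY (ReferenceTableID);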

    Read the article

  • ILMerge - Unresolved assembly reference not allowed: System.Core

    - by Steve Michelotti
    ILMerge is a utility which allows you to merge multiple .NET assemblies into a single binary assembly for more convenient distribution. Recently we ran into problems when attempting to use ILMerge on a .NET 4 project. We received the error message: An exception occurred during merging: Unresolved assembly reference not allowed: System.Core.     at System.Compiler.Ir2md.GetAssemblyRefIndex(AssemblyNode assembly)     at System.Compiler.Ir2md.GetTypeRefIndex(TypeNode type)     at System.Compiler.Ir2md.VisitReferencedType(TypeNode type)     at System.Compiler.Ir2md.GetMemberRefIndex(Member m)     at System.Compiler.Ir2md.PopulateCustomAttributeTable()     at System.Compiler.Ir2md.SetupMetadataWriter(String debugSymbolsLocation)     at System.Compiler.Ir2md.WritePE(Module module, String debugSymbolsLocation, BinaryWriter writer)     at System.Compiler.Writer.WritePE(String location, Boolean writeDebugSymbols, Module module, Boolean delaySign, String keyFileName, String keyName)     at System.Compiler.Writer.WritePE(CompilerParameters compilerParameters, Module module)     at ILMerging.ILMerge.Merge()     at ILMerging.ILMerge.Main(String[] args) It turns out that this issue is caused by ILMerge.exe not being able to find the .NET 4 framework by default. The answer was ultimately found here. You either have to use the /lib option to point to your .NET 4 framework directory (e.g., "C:\Windows\Microsoft.NET\Framework\v4.0.30319" or "C:\Windows\Microsoft.NET\Framework64\v4.0.30319") or just use an ILMerge.exe.config file that looks like this: <configuration> <startup useLegacyV2RuntimeActivationPolicy="true"> <requiredRuntime safemode="true" imageVersion="v4.0.30319" version="v4.0.30319"/> </startup> </configuration> This was able to successfully resolve my issue.

    Read the article

  • Sharepoint 2010, 404 error after installation

    - by Tommy Jakobsen
    Running Windows Server 2008 Standard R2, SQL Server 2008 Enterprise and Team Foundation Server 2010, I installed SharePoint Server 2010 (single server). It installed correctly, and the wizard configured it without errors. When accessing the SharePoint server through http://localhost/ I get a 404 error. I also get a 404 when trying to access the admin interface on port 42620. SharePoint, TFS and Reporting Services are the only applications on my IIS, NOT sharing the same port, so that can't be the error. Do you have any ideas what the problem can be? Is there some way that I can debug this?

    Read the article

  • SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup

    - by pinaldave
    In recent times I have observed that not many people have a proper understanding of what a bookmark lookup or key lookup is. An increasing number of questions tells me that this is something developers encounter every single day but have no idea how to deal with. I have previously written three articles on this subject, and I want to point everyone looking for further information to the following posts. SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 2 SQL SERVER – Query Optimization – Remove Bookmark Lookup – Remove RID Lookup – Remove Key Lookup – Part 3 SQL SERVER – Interesting Observation – Execution Plan and Results of Aggregate Concatenation Queries In one of my recent classes we had an in-depth conversation about the alternatives to creating covering indexes to remove the bookmark lookup. I really want to open this question to all of you and see what the community thinks about it. Is there any other way than creating a covering index or included index to remove this expensive key lookup? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Backup and Restore, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology
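
    For readers hunting for the covering-index technique the question takes as its baseline, a minimal sketch (the object names are illustrative):

        -- INCLUDE stores the extra columns only at the index leaf level,
        -- so the query is answered without a key lookup into the base table:
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderDate, TotalDue);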

    Read the article

  • SFML: Generate a background image

    - by BlackMamba
    I want to generate a background, which is used in the game, on every instance of the game based on certain conditions. To do so, I'm using a sf::RenderTexture and a sf::Texture like this: sf::RenderTexture image; std::vector<sf::Texture> textures; sf::Texture texture; sf::Sprite sprite; // instantiating the vector of textures and the image not shown here for (int i = 0; i < certainSize; ++i) { if (certainCondition) { texture.loadFromFile("file"); sprite.setTexture(texture); sprite.setPosition(pos1, pos2); } else { ... } image.draw(sprite); } The point here is that I draw single textures onto a sf::RenderTexture, but because textures always live in the graphics card's memory, I can't exceed a certain map size, which I need to. I also considered using an sf::Image, but I can't find a way to draw an image (i.e. a texture) to it. The third way I found was using an sf::VertexArray, but this seems to be a bit too low-level for my rather simple purposes. So is there a common way to dynamically generate a background image based on other existing images?

    Read the article

  • In 12.04: Failed to load session 'ubuntu' [closed]

    - by Stéphane
    Possible Duplicate: There's an issue with an Alpha/Beta Release of Ubuntu, what should I do? I'm using 12.04 beta. Today I was prompted to install some updates, which I did, followed by a reboot. On reboot, X starts, but all I see is a single dialog window in the middle of the screen with the text: Failed to load session 'ubuntu' I don't even see the mouse, or the login screen, just this 1 line of text. When I hit CTRL+ALT+F1 to run dist-upgrade from a command prompt, I get this: The following packages have been kept back: libgnome-desktop-3-2 So to see why it was kept back, I tried the following: $ sudo apt-get install libgnome-desktop-3-2 ... The following packages have unmet dependencies: libgnome-desktop-3-2 : Depends: gnome-desktop3-data (= 3.3.92-0ubuntu1) but 3.3.91-0ubuntu2 is to be installed E: Unable to correct problems, you have held broken packages. Anyone else seeing this, or have an idea how to fix it? If you're going to close it as a duplicate, can you please link to the duplicate question?

    Read the article

  • Processing Email in Outlook

    - by Daniel Moth
    A. Why Goal 1 = Help others: Have at most a 24-hour response turnaround to internal (from colleague) emails, typically achieving same day response. Goal 2 = Help projects: Not to implicitly pass/miss an opportunity to have impact on electronic discussions around any project on the radar. Not achieving goals 1 & 2 = Colleagues stop relying on you, drop you off conversations, don't see you as a contributing resource or someone that cares, you are perceived as someone with no peripheral vision. Note this is perfect if all you are doing is cruising at your job, trying to fly under the radar, with no ambitions of having impact beyond your absolute minimum 'day job'. B. DON'T: Leave unread email lurking around Don't: Receive or process all incoming emails in a single folder ('inbox' or 'unread mail'). This is actually possible if you receive a small number of emails (e.g. new to the job, not working at a company like Microsoft). Even so, with (your future) success at any level (company, community) comes large incoming email, so learn to deal with it. With large volumes, it is best to let the system help you by doing some categorization and filtering on your behalf (instead of trying to do that in your head as you process the single folder). See later section on how to achieve this. Don't: Leave emails as 'unread' (or worse: read them, then mark them as unread). Often done by individuals who think they possess super powers ("I can mentally cache and distinguish between the emails I chose not to read, the ones that are actually new, and the ones I decided to revisit in the future; the fact that they all show up the same (bold = unread) does not confuse me"). Interactions with these super-powered individuals typically end up with them saying stuff like "I must have missed that email you are talking about (from 2 weeks ago)" or "I am a bit behind, so I haven't read your email, can you remind me". TIP: The only place where you are "allowed" unread email is in your Deleted Items folder. Don't: Interpret a read email as an email that has been processed. Doing that means you will always end up with fake unread email (that you have actually read, but haven't dealt with completely so you then marked it as unread) lurking between actual unread email. Another side effect is reading the email and making a 'mental' note to action it, then leaving the email as read, so the only thing left to remind you to carry out the action is… you. You are not super human, you will forget. This is a key distinction. Reading (or even scanning) a new email means you now know what needs to be done with it, in order for it to be truly considered processed. Truly processing an email is to, for example, write an email of your own (e.g. to reply or forward), or take a non-email related action (e.g. create calendar entry, do something on some website), or read it carefully to gain some knowledge (e.g. it had a spec as an attachment), or keep it around as reference etc. 'Reading' means that you know what to do, not that you have done it. An email that is read is an email that is triaged, not an email that is resolved. Sometimes the thing that needs to be done based on receiving the email, you can (and want) to do immediately after reading the email. That is fine, you read the email and you processed it (typically when it takes no longer than X minutes, where X is your personal tolerance – mine is roughly 2 minutes).
    Other times, you decide that you don't want to spend X minutes at that moment, so after reading the email you need a quick system for "marking" the email as to be processed later (and you still leave it as 'read' in Outlook). See later section for how. C. DO: Use Outlook rules and have multiple folders where incoming email is automatically moved to Outlook email rules are very powerful and easy to configure. Use them to automatically file email into folders. Here are mine (note that if a rule catches an email message then no further rules get processed): "personal" Email is either personal or business related. Almost all personal email goes to my gmail account. The personal emails that end up on my work email account, go to a dedicated folder – that is achieved via a rule that looks at the email's 'From' field. For those that slip through, I use the new Outlook 2010 quick step "Conversation To Folder" to let the slippage only occur once per conversation, and then update my rules. "External" and "ViaBlog" The remaining external emails either come from my blog (rule on the subject line) or are unsolicited (rule on the domain name not being microsoft) and they are filed accordingly. "invites" I may do a separate blog post on calendar management, but suffice to say it should be kept up to date. All invite requests end up in this folder, so that even if mail gets out of control, the calendar can stay under control (only 1 folder to check). I.e. so I can let the organizer know why I won't be attending their meeting (or that I will be). Note: This folder is the only one that shows the total number of items in it, instead of the total unread. "Inbox" The only email that ends up here is email sent TO me and me only. Note that this is also the only email that shows up above the systray icon in the notification toast – all other emails cannot interrupt. "ToMe++" Email where I am on the TO line, but there are other recipients as well (on the TO or CC line). "CC" Email where I am on the CC line. I need to read these, but nobody is expecting a response or action from me so they are not as urgent (and if they are and follow up with me, they'll receive a link to this). "@ XYZ" Emails to aliases that are about projects that I directly work on (and I wasn't on the TO or CC line, of course). Test: these projects are in my commitments that I get measured on at the end of the year. "Z Mass" and subfolders under it per distribution list (DL) Emails to aliases that are about topics that I am interested in, but not that I formally own/contribute to. Test: if I unsubscribed from these aliases, nobody could rightfully complain. "Admin" folder, which resides under "Z Mass" folder Emails to aliases that I was added to, typically by an admin, e.g. broad emails to the floor/group/org/building/division/company that I am a member of. "BCC" folder, which resides under "Z Mass" Emails where I was not on the TO or the CC line explicitly and the alias it was sent to is not one I explicitly subscribed to (or I have been added to the BCC line, which I briefly touched on in another post). When there are only a few quick minutes to catch up on email, read as much as possible from these folders, in this order: Invites, Inbox, ToMe++. Only when these folders are all read (remember that doesn't mean that each email in them has been fully dealt with), we can move on to the @XYZ and then the CC folders. Only when those are read we can go on to the remaining folders.
    Note that the typical flow in the "Z Mass" subfolders is to scan subject lines and use the new Ctrl+Delete Outlook 2010 feature to ignore conversations. D. DO: Use Outlook Search folders in combination with categories As you process each folder, when you open a new email (i.e. click on it and read it in the preview pane) the email becomes read and stays read and you have to decide whether: It can take 2 minutes to deal with for good, right now, or It will take longer than 2 minutes, so it needs to be postponed with a clear next step, which is one of ToReply – there may be intermediate action steps, but ultimately someone else needs to receive email about this Action – no email is required, but I need to do something ReadLater – no email is required from the quick scan, but this is too long to fully read now, so it needs to be read later WaitingFor – the email is informing of an intermediate status and 'promising' a future email update. Need to track. SomedayMaybe – interesting but not important, non-urgent, non-time-bound information. I may want to spend part of one of my weekends reading it. For all these 'next steps' use Outlook categories (right click on the email and assign category, or use shortcut key). Note that I also use category 'WaitingFor' for email that I send where I am expecting a response and need to track it. Create a new search folder for each category (I dragged the search folders into my favorites at the top left of Outlook, above my inboxes). So after the activity of reading/triaging email in the normal folders (where the email arrived) is done, the result is a bunch of emails appearing in the search folders (configure them to show the total items, not the total unread items). To actually process email (that takes more than 2 minutes to deal with) process the search folders, starting with ToReply and Action. E. DO: Get into a Routine Now you have a system in place, get into a routine of using it. Here is how I personally use mine, but this part I keep tweaking: Spend short bursts of time (between meetings, during boring but mandatory meetings and, in general, 2-4 times a day) aiming to have no unread emails (and in the process deal with some emails that take less than 2 minutes). Spend around 30 minutes at the end of each day processing most urgent items in search folders. Spend as long as it takes each Friday (or even the weekend) ensuring there is no unnecessary email baggage carried forward to the following week. F. Other resources Official Outlook help on: Create custom actions rules, Manage e-mail messages with rules, creating a search folder. Video on ignoring conversations (Ctrl+Del). Official blog post on Quick Steps and in particular the Move Conversation to folder. If you've read "Getting Things Done" it is very obvious that my approach to email management is driven by GTD. A very similar approach was described previously by ScottHa (also influenced by GTD), worth reading here. He also described how he sets up 2 Outlook rules ('invites' and 'external') which I also use – worth reading that too. Comments about this post welcome at the original blog.

    Read the article

  • MS Access Premiere Products Exercise

    - by rynwtts
    I am working with Microsoft Access, Premiere Products Exercises, for a college course. I can't seem to get past a specific question. We are working with DBDL and E-R diagrams. The question is here: Indicate the changes you need to make to the design of the Premiere Products database to support the following situation. A customer is not necessarily represented by a single sales rep but can be represented by several sales reps. When a customer places an order, the sales rep who gets the commission on the order must be one of the collection of sales reps who represents the customer. In the database as it stands, each customer is represented by one sales rep, which yields a one-to-many relationship (one rep to many customers). I need to enable a customer to have several sales reps, and make it so that only those sales reps will be eligible for commission on each order.
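
    A sketch of the usual DBDL answer: a junction table, with orders referencing the (rep, customer) pair so the commission constraint holds. The column types follow the textbook's usual Premiere Products schema and are an assumption here:

        CREATE TABLE RepCustomer (
            RepNum      CHAR(2) NOT NULL,
            CustomerNum CHAR(3) NOT NULL,
            PRIMARY KEY (RepNum, CustomerNum),
            FOREIGN KEY (RepNum)      REFERENCES Rep (RepNum),
            FOREIGN KEY (CustomerNum) REFERENCES Customer (CustomerNum)
        );

        -- Assumes Orders carries both RepNum and CustomerNum; the composite
        -- foreign key guarantees the commissioned rep represents the customer:
        ALTER TABLE Orders
            ADD FOREIGN KEY (RepNum, CustomerNum)
            REFERENCES RepCustomer (RepNum, CustomerNum);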

    Read the article

  • OpenGL extension vs OpenGL core

    - by user209347
    I was wondering: I'm writing a cross-platform engine in OpenGL and C++, and I figured out that Windows forces developers to access OpenGL features above 1.1 through extensions. Now the thing is, on Linux, I know that I can directly access functions if the version supports it, through glext.h and the OpenGL version. The problem is that if, on Linux, the core doesn't support it, is it possible there is an extension that supports the same functionality, in my case vertex buffer objects? I'm doing something like this: Windows: (hashdeck) define glFunction functionpointer_to_the_extension (apparently the layout changes font size if I use #) Linux: Since glext already defined glFunction, I can write glFunction in client code, and compile it both on Windows AND Linux without changing a single line in my client code using the engine (my goal). Now the thing is, I saw a tutorial use only the extension on Linux, and not check for the OpenGL implementation version. If the functionality is available in the core, is it also available as an extension (e.g. VBOs)? Or is an extension something you never know is available? I want to write an engine that uses all the possibilities of the hardware, so I need to check (on Linux) for extensions as well as the core version for possible functionality implementation.

    Read the article

  • Installing SharePoint 2010 and PowerPivot for SharePoint on Windows 7

    - by smisner
    Many people like me want (or need) to do their business intelligence development work on a laptop. As someone who frequently speaks at various events or teaches classes on all subjects related to the Microsoft business intelligence stack, I need a way to run multiple server products on my laptop with reasonable performance. Once upon a time, that requirement meant only that I had to load the current version of SQL Server and the client tools of choice. In today's post, I'll review my latest experience with trying to make the newly released Microsoft BI products work with a Windows 7 operating system.The entrance of Microsoft Office SharePoint Server 2007 into the BI stack complicated matters and I started using Virtual Server to establish a "suitable" environment. As part of the team that delivered a lot of education as part of the Yukon pre-launch activities (that would be SQL Server 2005 for the uninitiated), I was working with four - yes, four - virtual servers. That was a pretty brutal workload for a 2GB laptop, which worked if I was very, very careful. It could also be a finicky and unreliable configuration as I learned to my dismay at one TechEd session several years ago when I had to reboot a very carefully cached set of servers just minutes before my session started. Although it worked, it came back to life very, very slowly much to the displeasure of the audience. They couldn't possibly have been less pleased than me.At that moment, I resolved to get the beefiest environment I could afford and consolidate to a single virtual server. Enter the 4GB 64-bit laptop to preserve my sanity and my livelihood. Likewise, for SQL Server 2008, I managed to keep everything within a single virtual server and I could function reasonably well with this approach.Now we have SQL Server 2008 R2 plus Office SharePoint Server 2010. That means a 64-bit operating system. Period. That means no more Virtual Server. That means I must use Hyper-V or another alternative. I've heard alternatives exist, but my few dabbles in this area did not yield positive results. It might have been just me having issues rather than any failure of those technologies to adequately support the requirements.My first run at working with the new BI stack configuration was to set up a 64-bit 4GB laptop with a dual-boot to run Windows Server 2008 R2 with Hyper-V. However, I was generally not happy with running Windows Server 2008 R2 on my laptop. For one, I couldn't put it into sleep mode, which is helpful if I want to prepare for a presentation beforehand and then walk to the podium without the need to hold my laptop in its open state along the way (my strategy at the TechEd session long, long ago). Secondly, it was finicky with projectors. I had issues from time to time and while I always eventually got it to work, I didn't appreciate those nerve-wracking moments wondering whether this would be the time that it wouldn't work.Somewhere along the way, I learned that it was possible to load SharePoint 2010 in a Windows 7 which piqued my interest. I had just acquired a new laptop running Windows 7 64-bit, and thought surely running the BI stack natively on my laptop must be better than running Hyper-V. (I have not tried booting to Hyper-V VHD yet, but that's on my list of things to try so the jury of one is still out on this approach.) Recently, I had to build up a server with the RTM versions of SQL Server 2008 R2 and Sharepoint Server 2010 and decided to follow suit on my Windows 7 Ultimate 64-bit laptop. 
The process is slightly different, but I'm happy to report that it IS possible, although I had some fits and starts along the way.

DISCLAIMER: These products are NOT intended to be run in production mode on the Windows 7 operating system. The configuration described in this post is strictly for development or learning purposes and is not supported by Microsoft. If you have trouble, you will NOT get help from them. I might be able to help, but I provide no guarantees of my ability or availability to help.

I won't provide the step-by-step instructions in this post, as there are other resources that provide these details, but I will provide an overview of my approach, point you to the relevant resources, describe some of the problems I encountered, and explain how I addressed those problems to achieve my desired goal.

Because my goal was not simply to set up SharePoint Server 2010 on my laptop, but specifically PowerPivot for SharePoint, I started out by referring to the installation instructions at the PowerPivot-Info site, but mainly to confirm that I was performing steps in the proper sequence. I didn't perform the steps in Part 1 because those steps are applicable only to a server operating system, which I am not running on my laptop. The instructions in Part 2 won't work exactly as written, for the same reason. Instead, I followed the instructions on MSDN, Setting Up the Development Environment for SharePoint 2010 on Windows Vista, Windows 7, and Windows Server 2008. In general, I found the following differences in installation steps from the steps at PowerPivot-Info:

- You must copy the SharePoint installation media to the local drive so that you can edit the config.xml to allow installation on a Windows client (see the config.xml sketch below).
- You also have to manually install the prerequisites. The instructions provide links to each item that you must manually install, along with a command-line instruction to execute that enables the required Windows features.

I will digress for a moment to save you some grief about the sequence of steps to perform. I discovered later that a missing step in the MSDN instructions is to install the November CTP Reporting Services add-in for SharePoint. When I went to test my SharePoint site (I believe I tested after I had a successful PowerPivot installation), I ran into the following error:

    Could not load file or assembly 'RSSharePointSoapProxy, Version=10.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91' or one of its dependencies. The system cannot find the file specified.

I was rather surprised that Reporting Services was required. Then I found an article by Alan le Marquand, Working Together: SQL Server 2008 R2 Reporting Services Integration in SharePoint 2010, that instructed readers to install the November add-in. My first reaction was, "Really?!?" But I confirmed it in another TechNet article on hardware and software requirements for SharePoint Server 2010. It doesn't refer explicitly to the November CTP, but following the link took me there. (Interestingly, I retested today and there's no longer any reference to the November CTP. Here's the link to download the latest and greatest Reporting Services Add-in for SharePoint Technologies 2010.)
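As a footnote to the first bullet above, here is roughly what the config.xml edit looks like. This is a minimal sketch based on the MSDN article I mentioned; the AllowWindowsClientInstall setting is the one that article documents, but the file's location (typically under files\Setup in the copied media) and its other contents may vary, so leave everything already in the file exactly as shipped.

    <Configuration>
      <!-- Keep all of the settings already present in the file as-is. -->
      <!-- Adding this one setting lets SharePoint 2010 setup run on a
           Windows client operating system such as Windows 7. -->
      <Setting Id="AllowWindowsClientInstall" Value="True"/>
    </Configuration>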
You don't need to download the add-in anymore if you're doing a regular server-based installation of SharePoint, because it installs automatically as part of the prerequisites.

When it was time to start the installation of SharePoint, I deviated from the MSDN instructions and from the PowerPivot-Info instructions:

- On the Choose the installation you want page of the installation wizard, I chose Server Farm.
- On the Server Type page, I chose Complete.
- At the end of the installation, I did not run the configuration wizard.

Returning to the PowerPivot-Info instructions, I tried to follow the instructions in Part 3, which describe installing SQL Server 2008 R2 with the PowerPivot option. These instructions tell you to choose the New Server option on the Setup Role page where you add PowerPivot for SharePoint. However, I ran into problems with this approach and got installation errors at the end.

It wasn't until much later, as I was investigating an error, that I encountered Dave Wickert's post explaining that installing PowerPivot for SharePoint on Windows 7 is unsupported. Uh oh. But he did want to hear about it if anyone succeeded, so I decided to take the plunge. Perseverance paid off, and I can happily inform Dave that it does work so far. I haven't tested absolutely everything with PowerPivot for SharePoint, but I have successfully deployed a workbook and viewed the PowerPivot Management Dashboard. I have not yet tested the data refresh feature, but I have completed the installation. Continue reading to see how I accomplished my objective.

I uninstalled SQL Server 2008 R2 and started again. I had different problems, which I don't recollect now. However, I uninstalled again, approached installation from a different angle, and my next attempt succeeded. The downside of this approach is that you must do all of the things yourself that are done automatically when you install PowerPivot as a new server. Here are the steps that I followed:

1. Install SQL Server 2008 R2 to get a database engine instance installed.
2. Run the SharePoint configuration wizard to set up the SharePoint databases.
3. In Central Administration, create a Web application using classic mode authentication, as per a TechNet article on PowerPivot Authentication and Authorization (see the PowerShell sketch below).

Then I followed the steps I found at How to: Install PowerPivot for SharePoint on an Existing SharePoint Server. Especially important to note: you must launch setup by using Run as administrator. I did not have to manually deploy the PowerPivot solution as the instructions specify, but it's good to know about this step because it tells you where to look in Central Administration to confirm a successful deployment.

I did spot some incorrect steps in the instructions (at the time of this writing) in How To: Configure Stored Credentials for PowerPivot Data Refresh. Specifically, in the section entitled Step 1: Create a target application and set the credentials, both steps 10 and 12 are incorrect. They tell you to provide an actual Windows user name and password on the page where you are simply defining the prompts for your application in the Secure Store Service. To add the Windows user name and password that you want to associate with the application - after you have successfully created the target application - you select the target application and then click Set credentials in the ribbon.

Lastly, I followed the instructions at How to: Install Office Data Connectivity Components on a PowerPivot server. However, I have yet to test this in my current environment.
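As promised in step 3 of the list above, here is a rough PowerShell sketch of creating a classic-mode web application. I actually performed this step in Central Administration, so treat the script strictly as an illustrative alternative: the web application name, port, application pool, and service account are hypothetical placeholders, and the account must already be registered as a managed account in your farm.

    # Load the SharePoint snap-in if you're not already in the
    # SharePoint 2010 Management Shell
    Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

    # In SharePoint 2010, omitting the -AuthenticationProvider parameter
    # creates a classic-mode (Windows classic) web application rather than
    # a claims-based one, which is the mode PowerPivot needs here.
    New-SPWebApplication -Name "PowerPivot Web App" `
        -Port 80 `
        -URL "http://localhost" `
        -ApplicationPool "PowerPivotAppPool" `
        -ApplicationPoolAccount (Get-SPManagedAccount "DOMAIN\spservice")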
I did have several stops and starts throughout this process and edited those out to spare you from reading non-essential information. I believe the explanation I have provided here accurately reflects the steps I followed to produce a working configuration. If you follow these steps and get a different result, please let me know so that together we can work through the issue and correct these instructions. I'm sure there are many other folks in the Microsoft BI community who will appreciate the ability to set up the BI stack in a Windows 7 environment for development or learning purposes.

