Search Results

Search found 24401 results on 977 pages for 'duplicate content'.


  • z-index has no effect in IE7 with Google Map and Navigation Sub-Menus

    - by bhamrick
    I feel like the problem is extremely apparent. I'm working on an issue with a client's site, which actually happens on several of my clients' sites, but this one is the most apparent: IE7 is refusing to obey z-index rules. I've played around with differing values, particularly on the divs #mapWrapper and #map. Take a look here: http://thepaysongroup.com/wp-content/plugins/hq_idx/searchlistings.php I've done dozens of web searches and I can't find anything that resolves this issue. I also read through Aleksandar Vacic's article on IE6/7 z-index discrepancies, but still nothing. Any assistance would be much appreciated; I'm tearing my hair out on this one.
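
    One pattern worth checking (a sketch, not verified against the linked page; #navigation is an assumed selector for the menu's container): IE7 creates a new stacking context for every positioned element, so a z-index on the sub-menus is only compared against siblings within the same positioned ancestor. Raising the menu's ancestor above the map's wrapper usually resolves it:

        /* IE7 compares z-index between positioned siblings, so lift the
           menu's positioned ancestor above the map's wrapper */
        #mapWrapper { position: relative; z-index: 1; }
        #navigation { position: relative; z-index: 1000; } /* assumed menu container */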

    Read the article

  • ASP.NET Repeater Causing jQuery Image Slider Issue

    - by Bry4n
    I have a jQuery image slider in a content page that worked fine. Once I converted it into an ASP.NET Repeater, the first image of the repeater would display twice, and then the slider would run normally. Any idea why the repeater is causing this? I think I've found the cause: the slider expects class="show" on the first image link only. The template currently looks like this:

        <ItemTemplate> <a href='<%#Eval("Url")%>'> <img src='<%#Eval("Image")%>' alt="Spring Break 2011" rel='<h3><%#Eval("Title")%></h3><%#Eval("Caption")%>'/></a> </ItemTemplate>

    Does anyone know how to add that class only on the first pass through the repeater?
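
    A sketch of one way to do it (assuming the template above): Container.ItemIndex is available inside a Repeater's ItemTemplate, so the class can be emitted conditionally for the first row only:

        <ItemTemplate>
          <%-- only the first repeated item gets class="show" --%>
          <a href='<%# Eval("Url") %>' class='<%# Container.ItemIndex == 0 ? "show" : "" %>'>
            <img src='<%# Eval("Image") %>' alt="Spring Break 2011"
                 rel='<h3><%# Eval("Title") %></h3><%# Eval("Caption") %>' />
          </a>
        </ItemTemplate>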

    Read the article

  • On form submit, show a jQuery loading image until the submit completes

    - by Jackson
    Hi team, can you offer a bit of advice? I am using a hosted SaaS CMS solution that lets you create basic apps with a web apps system. I have created a form for members to submit a bunch of images and content to their own area. Everything is working great, except that if the images being submitted via the form are large, it takes ages for the form to submit and reach the thank-you page. I am already using jQuery UI to split the form into 5 steps, and the jQuery facebox plugin for instructional popups. My question is: what would be the best way to display a loading gif (in facebox or in another suggested overlay) while the form is being submitted? Thanks for your help! Jack
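
    A minimal sketch of the usual approach (selector and image path are assumed; the facebox variant uses the plugin's documented jQuery.facebox(html) call):

        $('#wizardForm').submit(function () {
            // show a spinner while the browser uploads the large images
            $.facebox('<img src="/images/loading.gif" alt="Submitting..." />');
            // or, without facebox: $('#loadingOverlay').show();
        });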

    Read the article

  • Handling credit cards on iOS

    - by Susan Jackie
    I am using an NSURLConnection asynchronous request to transmit credit card information to a secure third-party server. I do the following: I get the credit card number, CVV, etc. from the UITextFields, encode the credit card information as JSON, and set it as the HTTP body of the request as follows:

        NSURL *url = [NSURL URLWithString:@"https://www.example.com"];
        NSMutableURLRequest *request = [[NSMutableURLRequest alloc] initWithURL:url];
        [request setHTTPMethod:@"POST"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Accept"];
        [request setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
        [request setHTTPBody:[NSJSONSerialization dataWithJSONObject:params options:kNilOptions error:&parseError]];

    Then I send this information via an asynchronous request to the secure third-party server:

        [NSURLConnection sendAsynchronousRequest:request queue:queue completionHandler:^(NSURLResponse *response, NSData *data, NSError *requestError) {

    What should I be considering when sending user credit card information to a third-party server using an NSURLConnection asynchronous request?
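
    One consideration commonly raised for this scenario (an illustration, not something from the question itself): validating the server certificate beyond the default trust check. Certificate pinning requires the delegate-based NSURLConnection API rather than sendAsynchronousRequest:. A sketch, with server.der a hypothetical copy of the server's certificate bundled in the app:

        - (void)connection:(NSURLConnection *)connection
            willSendRequestForAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge {
            SecTrustRef trust = challenge.protectionSpace.serverTrust;
            SecCertificateRef cert = SecTrustGetCertificateAtIndex(trust, 0);
            NSData *remote = (__bridge_transfer NSData *)SecCertificateCopyData(cert);
            NSData *pinned = [NSData dataWithContentsOfFile:
                [[NSBundle mainBundle] pathForResource:@"server" ofType:@"der"]];
            if ([remote isEqualToData:pinned]) {
                // the presented certificate matches the bundled copy; proceed
                [challenge.sender useCredential:[NSURLCredential credentialForTrust:trust]
                     forAuthenticationChallenge:challenge];
            } else {
                [challenge.sender cancelAuthenticationChallenge:challenge];
            }
        }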

    Read the article

  • PostgreSQL 8.4: reading OID-style BLOBs with Hibernate

    - by peter
    I am getting a weird case when querying Postgres 8.4 for some records with BLOBs (of type OID) with Hibernate. The query returns all right, but when my code wants to read the content of the BLOB with the simple code below, it gets 0 bytes back:

        public static byte[] readBlob(Blob blob) throws Exception {
            InputStream is = null;
            try {
                is = blob.getBinaryStream();
                return org.apache.commons.io.IOUtils.toByteArray(is);
            } finally {
                if (is != null) try { is.close(); } catch (Exception e) {}
            }
        }

    Funny thing is that I am getting this behavior only since I started adding more than one such record to the table. The underlying JDBC library is type 3 (postgresql-8.4-701). Can someone give me a hint as to how to solve this issue? Thanks, Peter
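
    One constraint worth ruling out (an assumption about the cause here, but documented PostgreSQL behavior): large objects can only be streamed inside a transaction, so a connection in auto-commit mode can hand back an empty stream. A plain-JDBC sketch around the helper above, where conn is a hypothetical handle to the same connection Hibernate is using:

        conn.setAutoCommit(false);          // large-object reads require a transaction
        try {
            byte[] bytes = readBlob(blob);  // the helper from the question
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }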

    Read the article

  • ASP.NET variable scope

    - by c11ada
    Hey all, I'm really new to ASP.NET and still haven't got my head round most of the concepts, and I'm really having problems with variables. When creating a desktop application, any variable created in code, for example int n = 10;, lives for as long as the program is running, right? But in ASP.NET I've created variables which are supposed to last until the user leaves the page. So, for example, say I have a page which takes the user's value (call this variable n; say the user types in 10), and every time the user presses the Next button the code adds 10 to this value, so the first time the user presses Next, n = n + 10. What I have found is that no matter how many times I press the Next button, n will always equal 20, because the previous value is not saved! These values are populated the first time the user enters the page; once the user clicks the Next button, their contents disappear! How can I stop this from happening? Hope this makes sense! Thanks
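
    A sketch of the standard fix (ViewState is one option; Session works the same way if the value must outlive the page): page-level fields are recreated on every request, so the value has to be round-tripped explicitly. The control and key names below are invented:

        protected void NextButton_Click(object sender, EventArgs e)
        {
            // page fields reset on every postback; ViewState survives the round trip
            int n = ViewState["n"] == null ? 0 : (int)ViewState["n"];
            n += 10;
            ViewState["n"] = n;
            ResultLabel.Text = n.ToString(); // hypothetical label for display
        }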

    Read the article

  • Loading and formatting an RTF file in UIWebView

    - by Jayshree
    Hello everybody. I am trying to load the contents of an RTF file in a UIWebView. I am successful in loading the contents, but they are unformatted: the fonts are too large and the formatting is lost. The view displays the color of the RTF content, but not the font or size. Is there any other way to load the RTF and format it? I load the RTF the following way:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"Home" ofType:@"rtf"];
        NSURL *url = [NSURL fileURLWithPath:path];
        NSURLRequest *req = [NSURLRequest requestWithURL:url];
        [menuWeb loadRequest:req];

    So what should I do now? Can anybody help me?
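
    One alternative worth trying (a sketch; RTF rendering support in UIWebView varies by iOS version): hand the view the raw bytes with an explicit RTF MIME type instead of a file-URL request:

        NSString *path = [[NSBundle mainBundle] pathForResource:@"Home" ofType:@"rtf"];
        NSData *rtfData = [NSData dataWithContentsOfFile:path];
        // an explicit MIME type lets the web view pick its RTF renderer
        [menuWeb loadData:rtfData
                 MIMEType:@"text/rtf"
         textEncodingName:@"utf-8"
                  baseURL:[NSURL fileURLWithPath:path]];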

    Read the article

  • Liquid templates - accessing members by name

    - by egarcia
    I'm using Jekyll to create a new blog. It uses Liquid underneath. Jekyll defines certain "variables": site, content, page, post and paginator. These "variables" have several "members". For instance, post.date will return the date of a post, while post.url will return its url. My question is: can I access a variable's member using another variable as the member name? See the following example:

        {% if my_condition %}
          {% assign name = 'date' %}
        {% else %}
          {% assign name = 'url' %}
        {% endif %}

    I have a variable called name which is either 'date' or 'url'. How can I make the Liquid equivalent of post[name] in Ruby? The only way I've found is using a for loop to iterate over all the pairs (key-value) of post. Beware! It is quite horrible:

        {% for property in post %}
          {% if property[0] == name %}
            {{ property[1] }}
          {% endif %}
        {% endfor %}

    Argh! I hope there is a better way. Thanks.
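
    For what it's worth (version-dependent; newer Liquid releases accept a variable inside brackets, so check the Liquid bundled with your Jekyll before relying on it):

        {% if my_condition %}{% assign name = 'date' %}{% else %}{% assign name = 'url' %}{% endif %}
        {{ post[name] }}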

    Read the article

  • Prevent Ruby on Rails from sending the session header

    - by hurikhan77
    How do I prevent Rails from always sending the session header (Set-Cookie)? This is a security problem if the application also sends the Cache-Control: public header. My application touches (but does not modify) the session hash in some/most actions. These pages display no private content, so I want them to be cacheable - but Rails always sends the cookie header, no matter whether the session hash being sent is different from the previous one or not. What I want to achieve is to send the hash only if it differs from the one received from the client. How can you do that? And should that fix perhaps also go into an official Rails release? What do you think?
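
    One avenue to investigate (an assumption; the option name depends on the Rack version in play - :defer in earlier releases, with :skip added later): telling the Rack session middleware not to set the cookie for cacheable actions:

        def show
          # the Rack session middleware consults these options before sending Set-Cookie
          request.env['rack.session.options'][:defer] = true
          # render as usual...
        end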

    Read the article

  • Using jQuery for client side validation in MVC2 RTM

    - by tigermain
    Scott Gu's tutorial on model validation gets us all set up with the MS client-side validation using the following scripts:

        <script src="../../Scripts/jquery.validate.min.js" type="text/javascript"></script>
        <script src="../../Scripts/MicrosoftMvcValidation.js" type="text/javascript"></script>

    However, I've seen various posts allowing us to utilise jQuery instead with the following code:

        <script src="https://ajax.microsoft.com/ajax/jquery/jquery-1.4.2.min.js" type="text/javascript"></script>
        <script src="https://ajax.microsoft.com/ajax/jQuery.Validate/1.6/jQuery.Validate.min.js" type="text/javascript"></script>
        <script src="<%= Url.Content("~/scripts/MicrosoftMvcJQueryValidation.js") %>" type="text/javascript"></script>

    However, MicrosoftMvcJQueryValidation.js does not ship with the solution, and from what I read it should be part of the Futures pack, which is no longer available on CodePlex. I managed to find a version alongside jQuery 1.3.2, but it does not work. What is the going-forward solution?

    Read the article

  • Designing an API wrapper for Twitter, Facebook, YouTube, etc.

    - by John Stewart
    I am looking for some pointers on how to design a wrapper for these social networking sites. Ideally what I want to do is create a black box with an interface that other libraries can call to interact with these social networks. I am planning on using OAuth for most of these sites; I already have this layer designed in PHP. The other layer that I need is the ability to push and pull content. For example, the ability to pull feeds for users from each of these networks. Should I then cache them on my end? How would I cache the Twitter, Facebook, etc. activity feeds and be able to account for resyncs? The networks that I am looking at are: Twitter, YouTube, Facebook, LinkedIn, Vimeo, Flickr. I am looking for ideas on how to tackle this in PHP. Any suggestions, or open-source systems that I can learn from?
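
    One common shape for the black box (a sketch only; all names are invented for illustration) is one adapter per network behind a single interface, so callers never see network-specific details:

        <?php
        // Hypothetical contract each network adapter implements
        interface SocialProvider {
            public function authenticate(array $credentials); // OAuth handshake
            public function pullFeed($userId, $since = null); // normalized activity items
            public function pushContent($userId, array $item);
        }

        class TwitterProvider implements SocialProvider {
            public function authenticate(array $credentials) { /* OAuth dance here */ }
            public function pullFeed($userId, $since = null) { return array(); }
            public function pushContent($userId, array $item) { return false; }
        }

        // Callers depend only on the interface, so networks stay swappable
        function syncFeed(SocialProvider $provider, $userId) {
            return $provider->pullFeed($userId, time() - 3600);
        }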

    Read the article

  • Strategy in exporting to Excel with formatting from ASP.NET?

    - by Jiho Han
    So this is another exporting-to-Excel question. I have a page that has a table formatted by a stylesheet. When I export the page by setting the ContentType to application/excel and Content-Disposition to attachment, I can export the table to Excel (not CSV). However, it loses all formatting. I think it's because Excel does not load the CSS, and I guess that's reasonable. So, in a scenario where I have to show the table on the web and also export it to Excel, both with similar (even if not exact) formatting, what would be the best approach without using something like NPOI? I am trying to minimize the work and keep a single template if possible. Is it necessary for me to create two separate templates: one with a stylesheet, the other with styles embedded in the table itself for Excel? Having a single template with conditional formatting inside would be very messy. Any ideas?
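
    One low-effort option to verify against your Excel versions (Excel parses CSS embedded in the exported HTML document itself; it just never fetches the external stylesheet the attachment references): inline the stylesheet into the response at export time, keeping the single template. The path and render helper below are hypothetical:

        Response.ContentType = "application/vnd.ms-excel";
        Response.AppendHeader("Content-Disposition", "attachment; filename=report.xls");
        Response.Write("<html><head><style>");
        Response.Write(File.ReadAllText(Server.MapPath("~/styles/report.css")));
        Response.Write("</style></head><body>");
        Response.Write(RenderTableHtml()); // hypothetical: renders the same table template
        Response.Write("</body></html>");
        Response.End();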

    Read the article

  • What's New In 11.1.2.1 (Talleyrand SP1)

    - by russ.bishop
    This release is primarily about bug fixes, and that's what we spent the most time on, but we also addressed a number of other things:

    1. Performance improvements

    We've done a lot of work to improve the performance of page load and execution times. For example, the View Compare page is about half the size it was previously! We've also done a lot of work on the server to improve the performance of queries, exports, action scripts, etc. We implemented some finer-grained locking so fewer operations will block other users while they are in progress. We made some optimizations to improve performance when you have a lot of network or database latency as well. Just a few examples: an Import that previously took 8 GB of memory and hours to complete now runs in about 30 minutes and never takes more than 1 GB of RAM. Searching by exact Node Name now completes within 2 seconds even for a hierarchy with millions of nodes. Another search that was taking 30 seconds to run now completes in less than 5 seconds.

    2. Upgrade support

    This release supports automatic upgrade from previous releases, built right into the console.

    3. Console improvements

    The Console has been reorganized and made easier to use. It is also much more multi-threaded, so it responds quicker without freezing up when you save changes or when it needs to get status.

    4. Property namespaces

    Properties now have a concept called a Namespace. This is tied into the Application Templates to prevent conflicts with duplicate property names. Right now, if you have an AccountType and you pull in the HFM template, it also has AccountType, so you end up creating properties with decorations on the name like "Account Type (HFM)". This is no longer necessary. In addition, properties within a namespace must have unique labels, but they can be duplicated across namespaces. So in the Property Grid when you click on the HFM category, you just see "AccountType". When you click on MyCategory, you see "AccountType", but they are different properties with different values. Within formulas, the names are still unique (e.g., Custom.AccountType vs HFM.AccountType). I'll write more about this one later.

    5. Single Sign-On

    DRM now supports Single Sign-On via HSS. For example, if you are using Oracle's OAM as your SSO solution, then you configure HSS to use OAM just like you would before. You also configure DRM to use HSS, again just like before. Then you configure OAM to protect the DRM web app, like you would any other website. However, once you do those things, users are no longer prompted to enter their username/password. They simply get redirected to OAM if they don't already have a login token; otherwise they pick their application and sail right into DRM. You can also avoid having to pick an application (see the next item).

    6. URL-based navigation

    You can now specify the application you want to log into via the URL. Combined with SSO and your intranet, it becomes easy to provide links on your intranet portal that take users directly into a specific DRM application. We also support specifying the Version, Hierarchy, and Node. Again, this can be used on your internal portal, but the scenarios get even more interesting when you are using workflow like Oracle BPEL: you can automatically generate links within emails that will take users directly to a specific node in the UI.

    7. Job status and cancellation

    A lot of the jobs now report their status and support true cancellation. Action Scripts also report a progress-complete percentage, since the amount of work is known ahead of time.

    8. Action Script Options

    Action scripts support Option declarations at the top of the file so a script can self-describe (when specified in the file, the corresponding item in the file is ignored). For example:

        Option|DetectDelimiter
        Option|UsePropertyNames|true

    This will tell DRM to automatically detect the delimiter (a pipe symbol in this case) and that all references to properties are by Name, not by Label. Note that when you load a script in the UI, if you use Labels we automatically try to match them up if they are unique. Any duplicates are indicated and you are presented with a choice to pick which property you actually referred to. This is somewhat similar to Version substitution, but tailored for properties.

    There are other, more minor changes and, like I said earlier, a lot of bug fixes and performance improvements. Hopefully I will get a chance to dig into some of these things in future blog posts.

    Read the article

  • Creating a QuerySet based on a ManyToManyField in Django

    - by River Tam
    So I've got two classes, Picture and Tag, that are as follows:

        class Tag(models.Model):
            pics = models.ManyToManyField('Picture', blank=True)
            name = models.CharField(max_length=30)
            # stuff omitted

        class Picture(models.Model):
            name = models.CharField(max_length=100)
            pub_date = models.DateTimeField('date published')
            tags = models.ManyToManyField('Tag', blank=True)
            content = models.ImageField(upload_to='instaton')
            # stuff omitted

    What I'd like to do is get a queryset (for a ListView) that, given a tag name, contains the most recent X pictures tagged as such. I've looked up very similar problems, but none of the responses make any sense to me at all. How would I go about creating this queryset?
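
    A sketch of the usual query for this (names taken from the models above; x is the page size and tag_name comes from the view):

        # most recent x pictures carrying the given tag
        queryset = Picture.objects.filter(tags__name=tag_name).order_by('-pub_date')[:x]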

    Read the article

  • Custom view in ListView disappears while scrolling in Android

    - by troorl
    Hi. I wrote a simple custom view for rows in a ListView. It works fine, but when I scroll the list, I see only white rows without content; when scrolling stops, the rows show their content again. This is how I draw the custom view:

        @Override
        protected void onDraw(Canvas canvas) {
            if (poster != null) {
                canvas.drawBitmap(poster, padding, padding, paint);
            }
            canvas.drawText(title, 55 + padding, 16 + padding, paint);
        }

    Maybe I missed something? P.S. This is a short screencast (400 KB): http://dl.dropbox.com/u/190203/test.mpeg
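
    One frequent cause of exactly this symptom (a guess from the description, not confirmed from the screencast): the ListView's solid-colored scrolling cache painting over the rows during a fling. The usual first fix is a transparent cache color hint:

        // transparent scrolling cache so rows stay visible while flinging
        listView.setCacheColorHint(Color.TRANSPARENT);
        // equivalent in layout XML: android:cacheColorHint="#00000000"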

    Read the article

  • PHP HttpRequest Cookie Issue

    - by Daaron
    I have a chunk of code to test a web site login:

        $r = new HttpRequest($newlocation, HttpRequest::METH_GET);
        $r->addCookies($cookieArray);
        $r->send();

    The content of $cookieArray comes from a redirect, and I don't modify it in any way. The really odd part is that if the value of the cookie (an authentication token string) contains a slash, the login fails; if it doesn't contain a slash, everything works. Any ideas are appreciated.
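
    One thing worth trying (an assumption about the cause: reserved characters such as slashes in cookie values generally need percent-encoding to survive the round trip):

        // percent-encode each value before re-sending the cookies
        $encoded = array_map('rawurlencode', $cookieArray);
        $r->addCookies($encoded);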

    Read the article

  • Which is the better approach for custom pages?

    - by Mina Samy
    Hi all, I want to create a custom new-item page for SharePoint, but there are two approaches I could use, and I want to draw on your experience in determining which is better. The first is to create a page in a library, then create a C# library project to handle the events of the controls on the page. The second is to define a feature for the content type of my list, specify my custom form as the new-item form, then create a website containing the custom form and put this site in the layouts folder. For me the first approach is fine, but the problem is that a user may still access the default SharePoint new-item form, which I don't want to happen. On the other hand, I don't like the idea of placing the form in a library on the site. So which is better in your opinion? Thanks

    Read the article

  • Create Zip file from stream and download it

    - by Navid Farhadi
    I have a DataTable that I want to convert to XML and then zip, using DotNetZip. Finally, the user can download it via an ASP.NET web page. My code is below:

        dt.TableName = "Declaration";
        MemoryStream stream = new MemoryStream();
        dt.WriteXml(stream);
        ZipFile zipFile = new ZipFile();
        zipFile.AddEntry("Report.xml", "", stream);
        Response.ClearContent();
        Response.ClearHeaders();
        Response.AppendHeader("content-disposition", "attachment; filename=Report.zip");
        zipFile.Save(Response.OutputStream);
        //Response.Write(zipstream);
        zipFile.Dispose();

    The XML file in the zip file is empty.
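
    A likely cause worth checking (a sketch of the usual fix): after WriteXml, the MemoryStream's position sits at the end of the data, so DotNetZip reads zero bytes from it. Rewinding before adding the entry normally cures the empty file:

        dt.WriteXml(stream);
        stream.Position = 0; // rewind so AddEntry reads from the start
        zipFile.AddEntry("Report.xml", "", stream);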

    Read the article

  • Textarea to paragraphs

    - by zaf
    When I have to render textarea content on the front end, I usually pass it through a function that converts newlines to <br/> tags; double newlines signal paragraph tags, so blocks of text get surrounded by <p> and </p> tags. To save time I usually use a ready-made PHP function from the WordPress codebase. You can get the link from the man himself: http://ma.tt/scripts/autop/ If you check it out, you'll see it does some heavy lifting with about 20 regular expressions. I know I could use a WYSIWYG editor (like TinyMCE or CKEditor) that can format the data on the client and then send it to the server (most of them add <p>..</p> tags by default), but I want to hear how others have handled raw textarea input and then displayed it on the front end.
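
    For comparison, a minimal sketch of the core transform (nowhere near as robust as WordPress's autop, which also special-cases block-level tags):

        function simple_autop($text) {
            $text = str_replace(array("\r\n", "\r"), "\n", $text); // normalize newlines
            $blocks = preg_split('/\n{2,}/', trim($text));         // double newline = paragraph
            foreach ($blocks as $i => $block) {
                $blocks[$i] = '<p>' . nl2br($block) . '</p>';      // single newline = <br />
            }
            return implode("\n", $blocks);
        }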

    Read the article

  • SQLAuthority News – Great Time Spent at Great Indian Developers Summit 2014

    - by Pinal Dave
    The Great Indian Developer Conference (GIDS) is one of the most popular annual events held in Bangalore. This year GIDS was scheduled for April 22-25. I presented a total of four sessions at this event, and each session was very different from the others. Here are the details of the four sessions I presented there.

    [Photo: Pluralsight Shades]

    This was a great event and I had fantastic fun presenting technology there. I was indeed very excited that, along with me, many of my friends were presenting at the event as well. I want to thank all of you for attending my sessions and filling the room to standing room only every single time. I have already sent resources in my newsletter. You can sign up for the newsletter over here.

    [Photo: Indexing is an Art]

    I was amazed by the crowd present in the sessions at GIDS. There was great interest in the subject of SQL Server and performance tuning.

    [Photo: Audience at GIDS]

    I believe events like this provide a great platform to meet and share knowledge.

    [Photo: Pinal at Pluralsight Booth]

    Here are the abstracts of the sessions I presented. They were recorded, so at some point in time they will be available, but if you want the content of all the courses immediately, I suggest you check out my video courses on the same subjects on Pluralsight.

    Indexes, the Unsung Hero (Relevant Pluralsight Course)

    Slow-running queries are the most common problem that developers face while working with SQL Server. While it is easy to blame SQL Server for unsatisfactory performance, the issue often lies with the way queries have been written and how indexes have been set up. The session will focus on ways of identifying problems that slow down SQL Server, and indexing tricks to fix them. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. Indexes are the most crucial objects of the database. They are the first stop for any DBA and developer when it is about performance tuning. There is a good side as well as an evil side to indexes. To master the art of performance tuning, one has to understand the fundamentals of indexes and the best practices associated with them. We will cover various aspects of indexing such as duplicate indexes, redundant indexes, and missing indexes, as well as best practices around indexes.

    SQL Server Performance Troubleshooting: Ancient Problems and Modern Solutions (Relevant Pluralsight Course)

    Many believe performance tuning and troubleshooting is an art which has been lost in time. However, the truth is that the art has evolved with time, and there are more tools and techniques to overcome ancient troublesome scenarios. There are three major resources that create performance problems when bottlenecked: CPU, IO, and memory. In this session we will focus on detecting high-CPU scenarios and their resolutions. If time permits, we will cover other performance-related tips and tricks. At the end of this session, attendees will have a clear idea as well as action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning, one has to understand the fundamentals of performance, tuning, and the best practices associated with them. We will discuss performance tuning in this session with the help of demos.

    [Photo: Pinal Dave at GIDS]

    MySQL Performance Tuning – Unexplored Territory (Relevant Pluralsight Course)

    Performance is one of the most essential aspects of any application. Everyone wants their server to perform optimally and at the best efficiency. However, not many people talk about MySQL and performance tuning, as it is an extremely unexplored territory. In this session, we will talk about how we can tune MySQL performance. We will also try to cover other performance-related tips and tricks. At the end of this session, attendees will not only have a clear idea, but will also carry home action items regarding what to do when facing any of the above resource-intensive scenarios. Developers will walk out with scripts and knowledge that can be applied to their servers immediately after the session. To master the art of performance tuning, one has to understand the fundamentals of performance, tuning, and the best practices associated with them. You will also witness some impressive performance tuning demos in this session.

    Hidden Secrets and Gems of SQL Server We Bet You Never Knew (Relevant Pluralsight Course)

    SQL Trio Session! It really amazes us every time someone says SQL Server is an easy tool to handle and work with. Microsoft has done amazing work in making working with a complex relational database a breeze for developers and administrators alike. Though it looks like child's play to some, the reality is far from this notion. The basics and fundamentals are simple and uniform across databases, but the behavior and the nuts and bolts of SQL Server are something we need to master over a period of time. With a collective experience of more than 30 years among the speakers on databases, we will try to take a unique tour of various aspects of SQL Server and bring you life lessons learnt from working with SQL Server. We will share some of the trade secrets of performance, configuration, new features, tuning, behaviors, T-SQL practices, common pitfalls, and productivity tips on tools and more. This is a highly demo-filled session for practical use if you are a SQL Server developer or an administrator. The speakers will be able to stump you and give you answers on almost everything inside the relational database called SQL Server.

    I personally attended the sessions of Vinod Kumar, Balmukund Lakhani, Abhishek Kumar, and my favorite, Govind Kanshi.

    Summary

    If you missed this event, here are two action items: 1) sign up for the Resource Newsletter; 2) watch my video courses on Pluralsight.

    Reference: Pinal Dave (http://blog.sqlauthority.com)

    Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLAuthority News, T SQL. Tagged: GIDS

    Read the article

  • TWiki: Restricting users' scope when editing pages

    - by abhishekgupta92
    I am building a TWiki website and I am stuck with a problem. The users have write access to the pages, so I want to include a Report Abuse option so that in cases of vandalism things can be corrected as soon as possible. I am also facing another problem: someone told me that while editing a page, a person can change the read/write permissions for the other users. Obviously, that isn't something I would expect from TWiki. Come on, there must be some way of moderation in TWiki to ensure there is no undesirable content.
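
    On the permissions point, TWiki's standard access-control preferences can help (a sketch; the group names are invented). Preferences finalized in a web's WebPreferences via FINALPREFERENCES cannot be overridden by someone editing an individual topic:

           * Set ALLOWTOPICCHANGE = Main.TrustedEditorsGroup
           * Set ALLOWWEBCHANGE = Main.TrustedEditorsGroup
           * Set FINALPREFERENCES = ALLOWTOPICCHANGE, ALLOWWEBCHANGE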

    Read the article

  • Laissez les bon temps rouler! (Microsoft BI Conference 2010)

    - by smisner
    Laissez les bons temps rouler" is a Cajun phrase that I heard frequently when I lived in New Orleans in the mid-1990s. It means "Let the good times roll!" and encapsulates a feeling of happy expectation. As I met with many of my peers and new acquaintances at the Microsoft BI Conference last week, this phrase kept running through my mind as people spoke about their plans in their respective businesses, the benefits and opportunities that the recent releases in the BI stack are providing, and their expectations about the future of the BI stack.Notwithstanding some jabs here and there to point out the platform is neither perfect now nor will be anytime soon (along with admissions that the competitors are also not perfect), and notwithstanding several missteps by the event organizers (which I don't care to enumerate), the overarching mood at the conference was positive. It was a refreshing change from the doom and gloom hovering over several conferences that I attended in 2009. Although many people expect economic hardships to continue over the coming year or so, everyone I know in the BI field is busier than ever and expects to stay busy for quite a while.Self-Service BISelf-service was definitely a theme of the BI conference. In the keynote, Ted Kummert opened with a look back to a fairy tale vision of self-service BI that he told in 2008. At that time, the fairy tale future was a time when "every end user was able to use BI technologies within their job in order to move forward more effectively" and transitioned to the present time in which SQL Server 2008 R2, Office 2010, and SharePoint 2010 are available to deliver managed self-service BI.This set of technologies is presumably poised to address the needs of the 80% of users that Kummert said do not use BI today. He proceeded to outline a series of activities that users ought to be able to do themselves--from simple changes to a report like formatting or an addtional data visualization to integration of an additional data source. The keynote then continued with a series of demonstrations of both current and future technology in support of self-service BI. Some highlights that interested me:PowerPivot, of course, is the flagship product for self-service BI in the Microsoft BI stack. In the TechEd keynote, which was open to the BI conference attendees, Amir Netz (twitter) impressed the audience by demonstrating interactivity with a workbook containing 100 million rows. He upped the ante at the BI keynote with his demonstration of a future-state PowerPivot workbook containing over 2 billion records. It's important to note that this volume of data is being processed by a server engine, and not in the PowerPivot client engine. (Yes, I think it's impressive, but none of my clients are typically wrangling with 2 billion records at a time. Maybe they're thinking too small. This ability to work quickly with large data sets has greater implications for BI solutions than for self-service BI, in my opinion.)Amir also demonstrated KPIs for the future PowerPivot, which appeared to be easier to implement than in any other Microsoft product that supports KPIs, apart from simple KPIs in SharePoint. (My initial reaction is that we have one more place to build KPIs. Great. It's confusing enough. I haven't seen how well those KPIs integrate with other BI tools, which will be important for adoption.)One more PowerPivot feature that Amir showed was a graphical display of the lineage for calculations. 
(This is hugely practical, especially if you build up calculations incrementally. You can more easily follow the logic from calculation to calculation. Furthermore, if you need to make a change to one calculation, you can assess the impact on other calculations.)Another product demonstration will be available within the next 30 days--Pivot for Reporting Services. If you haven't seen this technology yet, check it out at www.getpivot.com. (It definitely has a wow factor, but I'm skeptical about its practicality. However, I'm looking forward to trying it out with data that I understand.)Michael Tejedor (twitter) demonstrated a feature that I think is really interesting and not emphasized nearly enough--overshadowed by PowerPivot, no doubt. That feature is the Microsoft Business Intelligence Indexing Connector, which enables search of the content of Excel workbooks and Reporting Services reports. (This capability existed in MOSS 2007, but was more cumbersome to implement. The search results in SharePoint 2010 are not only cooler, but more useful by describing whether the content is found in a table or a chart, for example.)This may yet be the dawning of the age of self-service BI - a phrase I've heard repeated from time to time over the last decade - but I think BI professionals are likely to stay busy for a long while, and need not start looking for a new line of work. Kummert repeatedly referenced strategic BI solutions in contrast to self-service BI to emphasize that self-service BI is not a replacement for the services that BI professionals provide. After all, self-service BI does not appear magically on user desktops (or whatever device they want to use). A supporting infrastructure is necessary, and grows in complexity in proportion to the need to simplify BI for users.It's one thing to hear the party line touted by Microsoft employees at the BI keynote, but it's another to hear from the people who are responsible for implementing and supporting it within an organization. Rob Collie (blog | twitter), Kasper de Jonge (blog | twitter), Vidas Matelis (site | twitter), and I were invited to join Andrew Brust (blog | twitter) as he led a Birds of a Feather session at TechEd entitled "PowerPivot: Is It the BI Deal-Changer for Developers and IT Pros?" I would single out the prevailing concern in this session as the issue of control. On one side of this issue were those who were concerned that they would lose control once PowerPivot is implemented. On the other side were those who believed that data should be freely accessible to users in PowerPivot, and even acknowledgment that users would get the data they want even if it meant they would have to manually enter into a workbook to have it ready for analysis. For another viewpoint on how PowerPivot played out at the conference, see Rob Collie's observations.Collaborative BII have been intrigued by the notion of collaborative BI for a very long time. Before I discovered BI, I was a Lotus Notes developer and later a manager of developers, working in a software company that enabled collaboration in the legal industry. Not only did I help create collaborative systems for our clients, I created a complete project management from the ground up to collaboratively manage our custom development work. In that case, collaboration involved my team, my client contacts, and me. I was also able to produce my own BI from that system as well, but didn't know that's what I was doing at the time. 
Only in recent years has SharePoint begun to catch up with the capabilities that I had with Lotus Notes more than a decade ago. Eventually, I had the opportunity at that job to formally investigate BI as another product offering for our software, and the rest - as they say - is history. I built my first data warehouse with Scott Cameron (who has also ventured into the authoring world by writing Analysis Services 2008 Step by Step and was at the BI Conference last week where I got to reminisce with him for a bit) and that began a career that I never imagined at the time.Fast forward to 2010, and I'm still lauding the virtues of collaborative BI, if only the tools will catch up to my vision! Thus, I was anxious to see what Donald Farmer (blog | twitter) and Rita Sallam of Gartner had to say on the subject in their session "Collaborative Decision Making." As I suspected, the tools aren't quite there yet, but the vendors are moving in the right direction. One thing I liked about this session was a non-Microsoft perspective of the state of the industry with regard to collaborative BI. In addition, this session included a better demonstration of SharePoint collaborative BI capabilities than appeared in the BI keynote. Check out the video in the link to the session to see the demonstration. One of the use cases that was demonstrated was linking from information to a person, because, as Donald put it, "People don't trust data, they trust people."The Microsoft BI Stack in GeneralA question I hear all the time from students when I'm teaching is how to know what tools to use when there is overlap between products in the BI stack. I've never taken the time to codify my thoughts on the subject, but saw that my friend Dan Bulos provided good insight on this topic from a variety of perspectives in his session, "So Many BI Tools, So Little Time." I thought one of his best points was that ideally you should be able to design in your tool of choice, and then deploy to your tool of choice. Unfortunately, the ideal is yet to become real across the platform. The closest we come is with the RDL in Reporting Services which can be produced from two different tools (Report Builder or Business Intelligence Development Studio's Report Designer), manually, or by a third-party or custom application. I have touted the idea for years (and publicly said so about 5 years ago) that eventually more products would be RDL producers or consumers, but we aren't there yet. Maybe in another 5 years.Another interesting session that covered the BI stack against a backdrop of competitive products was delivered by Andrew Brust. Andrew did a marvelous job of consolidating a lot of information in a way that clearly communicated how various vendors' offerings compared to the Microsoft BI stack. He also made a particularly compelling argument about how the existence of an ecosystem around the Microsoft BI stack provided innovation and opportunities lacking for other vendors. Check out his presentation, "How Does the Microsoft BI Stack...Stack Up?"Expo HallI had planned to spend more time in the Expo Hall to see who was doing new things with the BI stack, but didn't manage to get very far. Each time I set out on an exploratory mission, I got caught up in some fascinating conversations with one or more of my peers. I find interacting with people that I meet at conferences just as important as attending sessions to learn something new. There were a couple of items that really caught me eye, however, that I'll share here.Pragmatic Works. 
Whether you develop SSIS packages, build SSAS cubes, or author SSRS reports (or all of the above), you really must take a look at BI Documenter. Brian Knight (twitter) walked me through the key features, and I must say I was impressed. Once you've seen what this product can do, you won't want to document your BI projects any other way. You can download a free single-user database edition, or choose from more feature-rich standard or professional editions.Microsoft Press ebooks. I also stopped by the O'Reilly Media booth to meet some folks that one of my acquisitions editors at Microsoft Press recommended. In case you haven't heard, Microsoft Press has partnered with O'Reilly Media for distribution and publishing. Apart from my interest in learning more about O'Reilly Media as an author, an advertisement in their booth caught me eye which I think is a really great move. When you buy Microsoft Press ebooks through the O'Reilly web site, you can receive it in any (or all) of the following formats where possible: PDF, epub, .mobi for Kindle and .apk for Android. You also have lifetime DRM-free access to the ebooks. As someone who is an avid collector of books, I fnd myself running out of room for storage. In addition, I travel a lot, and it's hard to lug my reference library with me. Today's e-reader options make the move to digital books a more viable way to grow my library. Having a variety of formats means I am not limited to a single device, and lifetime access means I don't have to worry about keeping track of where I've stored my files. Because the e-books are DRM-free, I can copy and paste when I'm compiling notes, and I can print pages when necessary. That's a winning combination in my mind!Overall, I was pleased with the BI conference. There were many more sessions that I couldn't attend, either because the room was full when I got there or there were multiple sessions running concurrently that I wanted to see. Fortunately, many of the sessions are accessible for viewing online at http://www.msteched.com/2010/NorthAmerica along with the TechEd sessions. You can spot the BI sessions by the yellow skyline on the title slide of the presentation as shown below. Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • In PHP, how to send HTML tags in mail

    - by zahir hussain
    <script type="text/javascript"> var head= 23; </script> <?php $h="<script language='javascript'> document.write(head);</script>"; $headers = "MIME-Version: 1.0" . "\r\n"; $headers .= "Content-type:text/html;charset=ISO-8859-1" . "\r\n"; $mail_from='[email protected]'; $name="husssain"; $headers.="From: $name <$mail_from>"; $con="Welcome to world"; $to ="[email protected]"; $send_contact = mail($to,$con,$h,$headers); ?> then i send the $h to my mail id but i recieve only this <script language='javascript'> document.write(head);</script>

    Read the article

  • Is it better to have the client (JavaScript) process HTML rather than C#?

    - by Raja
    We are in the process of building a huge site. We are contemplating whether to do the processing of HTML on the server side (ASP.NET) or on the client side. For example, we have HTML files which act as templates for the generation of tabs. Is it better for the server side to get hold of the content section (div) of the HTML, load the appropriate values, and send the updated HTML to the browser, or is it better for a chunk of data to be passed to the client and have JavaScript do the work? Any justification for either approach would be helpful. Thanks.

    Read the article

  • WPF Statusbar Updates - help, I seem to be going round in circles

    - by David Ward
    I seem to be going round in circles. I have a WPF application that has a main ribbon window with a status bar. When you navigate to a "view", a user control is displayed as the content of the main window. The view has a ViewModel which handles retrieving data from the database, and the view's DataContext is set to the ViewModel. What I want is to have the lengthy operation (data retrieval) run on a background thread, and while it is running, the status in the main window should report appropriately. When the background task is complete, the status should revert to "Ready" (much the same as Visual Studio). How should I wire this together so that I can keep the data access code separated out in the ViewModel while keeping a responsive UI? I have tried using the BackgroundWorker in various places in the code and I still end up with an unresponsive UI.
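
    A sketch of one workable shape (all names are invented; the key point is that DoWork runs off the UI thread while RunWorkerCompleted is marshalled back onto it, so only the completed handler touches bound properties):

        var worker = new BackgroundWorker();
        worker.DoWork += (s, e) => e.Result = repository.LoadData(); // hypothetical slow call, runs off the UI thread
        worker.RunWorkerCompleted += (s, e) =>
        {
            viewModel.Items = (ObservableCollection<Item>)e.Result;  // back on the UI thread here
            mainWindowViewModel.Status = "Ready";                    // status bar binds to this property
        };
        mainWindowViewModel.Status = "Loading data...";
        worker.RunWorkerAsync();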

    Read the article
