Search Results

Search found 35433 results on 1418 pages for 'document based'.


  • Ad-hoc String Manipulation With Visual Studio

    - by Liam McLennan
    Visual Studio supports relatively advanced string manipulation via the ‘Quick Replace’ dialog. Today I had a requirement to modify some HTML, replacing line breaks with unordered list items. For example, I needed to convert:

        Infrastructure<br/>
        Energy<br/>
        Industrial development<br/>
        Urban growth<br/>
        Water<br/>
        Food security<br/>

    to:

        <li>Infrastructure</li>
        <li>Energy</li>
        <li>Industrial development</li>
        <li>Urban growth</li>
        <li>Water</li>
        <li>Food security</li>

    This cannot be done with a simple search-and-replace, but it can be done using the Quick Replace regular expression support. To use regular expressions, expand ‘Find Options’, check ‘Use:’ and select ‘Regular Expressions’. Typically, Visual Studio regular expressions use a different syntax from every other regular expression engine. We need to use a capturing group to grab the text of each line so that it can be included in the replacement. The syntax for a capturing group is to wrap the part of the expression to be captured in { and }. So my regular expression:

        {.*}\<br/\>

    means: capture all the characters before <br/>. Note that < and > have to be escaped with \. In the replacement expression we can use \1 to insert the previously captured text. If the search expression had a second capturing group, then its text would be available in \2, and so on. Visual Studio’s Quick Replace feature can be scoped to a selection, the current document, all open documents or every document in the current solution.
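
    Putting the pieces together, the two Quick Replace fields for this conversion would look as follows (a minimal sketch assembled from the expressions above, using the classic Visual Studio regex syntax the post describes):

        Find what:    {.*}\<br/\>
        Replace with: <li>\1</li>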

    Read the article

  • Considering Embedding a Database? Choose MySQL!

    - by Bertrand Matthelié
    The M of the LAMP stack and the #1 database for Web-based applications, MySQL is also an extremely popular choice as an embedded database. Access our Resource Kit to discover the top reasons why:

    • 3,000 ISVs and OEMs rely on MySQL as their embedded database
    • 8 of the top 10 software vendors and hundreds of startups selected MySQL to power their cloud, on-premise and appliance-based offerings
    • Leading mobile and SaaS providers ensure continuous service availability and scalability with lower cost and risk using MySQL Cluster

    Learn how you can reduce costs and accelerate time to market while increasing performance and reliability. Access white papers, webinars, case studies and other resources in our Resource Kit.

    Read the article

  • Extending AutoVue Through the API

    - by GrahamOracle
    The AutoVue API (previously called the “VueBean” API) is a great way to extend AutoVue Client/Server Deployment – specifically the client component – beyond the out-of-the-box capabilities and into new use-cases. In addition to having a solid grasp of J2SE programming, make sure to leverage the following resources if you’re developing or interested in developing customizations/extensions to AutoVue Client/Server Deployment:

    • Programmer’s Guide: Before all else, read through the AutoVue API Programmer’s Guide to get an understanding of the architecture of the API. The Programmer’s Guide is included with the installation of AutoVue, and is posted on the Oracle Technology Network (OTN) website for the recent versions of AutoVue: http://www.oracle.com/technetwork/documentation/autovue-091442.html
    • Javadocs: The AutoVue API Javadocs document the many packages, classes, and methods available to you. The Javadocs are included in the product installation under the \docs\JavaDocs\VueBean folder (the easiest starting point is the file index.html).
    • Integrations Forum: If you have development questions that aren’t answered through the documentation, feel free to register and post in the public AutoVue Integrations Forum. For more information refer to the following blog post from October 2010: https://blogs.oracle.com/enterprisevisualization/entry/exciting_news_autovue_integrat
    • Code Samples: Although the Oracle Support team’s scope of support for API/customization topics is to answer questions regarding information already provided in the documentation (i.e. not to design or develop custom solutions), there are cases where Support comes across interesting samples or code snippets that may benefit various customers. In those cases, our Support team posts the samples into the Oracle knowledge base, and tracks them through a single reference note. The link to the KM Note depends on how you currently access the My Oracle Support portal:
        Flash interface: https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&doctype=REFERENCE&id=1325990.1
        (New) HTML interface: https://supporthtml.oracle.com/epmos/faces/ui/km/SearchDocDisplay.jspx?type=DOCUMENT&id=1325990.1

    Happy coding!

    Read the article

  • Update Since Microsoft/PSC Office Open XML Case Study

    - by Tim Murphy
    In 2009 Microsoft released a case study about a project that we had done using the OOXML SDK 1.0 for Research Directors Inc. Since that time Microsoft has released version 2.0 of the SDK and PSC has done significant development with it. Below are some of the milestones we have reached since the original case study.

    At the time of the original case study, two report types had been automated to output as PowerPoint presentations. Now that all the main products have been delivered, we have added three reports with Word document outputs and five more reports with PowerPoint outputs.

    One improvement we made over the original application was to create a PowerPoint add-in which allows the users to tag a slide. These tags, along with the strongly typed SDK 2.0, allow the code to use LINQ to easily search for slides in the template files. This allows for a more flexible architecture based on assembling a presentation from copied slides extracted from the template.

    The new library we created also enabled us to create two new Word-based reports in two weeks. The library we created abstracts the generation of the documents from the business logic and the data retrieval. The key to this is the markup. Content Controls are a good method for identifying sections of a template to be modified or replaced. Join this with the concept of all data being generically either scalar or two-dimensional, and the code becomes more generic.

    In the end we found the OOXML SDK 2.0 to be a great tool for accelerating document generation development and creating happy clients.

    del.icio.us Tags: PSC Group,OOXML,Case Study,Office Open XML,Word,PowerPoint
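
    To illustrate the Content Control approach described above, here is a minimal sketch (not the actual PSC library) of locating a tagged content control in a Word template with the OOXML SDK 2.0 and swapping in data. The file name and the CustomerName tag are hypothetical:

        using System.Linq;
        using DocumentFormat.OpenXml.Packaging;
        using DocumentFormat.OpenXml.Wordprocessing;

        class TemplateFiller
        {
            static void Main()
            {
                using (var doc = WordprocessingDocument.Open("ReportTemplate.docx", true))
                {
                    // Find the content control whose Tag is "CustomerName".
                    var control = doc.MainDocumentPart.Document.Body
                        .Descendants<SdtElement>()
                        .FirstOrDefault(sdt =>
                            sdt.SdtProperties?.GetFirstChild<Tag>()?.Val?.Value == "CustomerName");

                    // Replace its text with the value pulled from the data layer.
                    var text = control?.Descendants<Text>().FirstOrDefault();
                    if (text != null) text.Text = "Research Directors Inc.";

                    doc.MainDocumentPart.Document.Save();
                }
            }
        }

    Tagged slides in a PowerPoint template can be located the same way through PresentationDocument, which is what makes the LINQ-based slide assembly described above possible.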

    Read the article

  • CodePlex Daily Summary for Friday, June 14, 2013

    CodePlex Daily Summary for Friday, June 14, 2013

    Popular Releases

    • BlackJumboDog: Ver5.9.1: 2013.06.13 Ver5.9.1 (1) Web??????SSI?#include???、CGI?????????????????????? (2) ???????????????????????????
    • BrightstarDB: BrightstarDB 1.3.40613: This is the first "official" BrightstarDB release under the MIT open source license. The code base has been reworked to replace / remove the use of third-party closed-source tools and has been updated to use a patched version of dotNetRDF 1.0 that includes the most recent updates for SPARQL 1.1 and Turtle. We have also extended the core RDF API to support targeting a specific graph with an update or query operation and made a change to the core profiling code to disable it by default, leadin...
    • Lakana - WPF Framework: Lakana V2.1 RTM: Dynamic text localization; a new application-wide message bus
    • Pokemon Battle Online: ETV: ETV???2012?12??????,????,???????$/PBO/branches/PrivateBeta??。 ???????bug???????。 ???? Server??????,?????。 ?????????,?????????????,?????????。 ????????,????,?????????,???????????(??)??。 ???? ????????????。 ???????。 ???PP????,????????????????????PP????,??3。 ?????????????,??????????。 ???????? ??? ?? ???? ??? ???? ?? ?????????? ?? ??? ??? ??? ???????? ???? ???? ???????????????、???????????,??“???????”??。 ???bug ???
    • Web Pages CMS: 0.5.0.5: Added empty media directory
    • Modern UI for WPF: Modern UI 1.0.4: The ModernUI assembly, including a demo app demonstrating the various features of Modern UI for WPF. Related downloads - NuGet: ModernUI for WPF is also available as a NuGet package in the NuGet gallery, id: ModernUI.WPF. Modern UI for WPF Templates: a Visual Studio 2012 extension containing a collection of project and item templates for Modern UI for WPF; the extension includes the ModernUI.WPF NuGet package.
    • Toolbox for Dynamics CRM 2011: XrmToolBox (v1.2013.6.11): XrmToolbox improvement: add exception handling when loading plugins; updated information panel for displaying two lines of text. Tools improvement: Metadata Document Generator (v1.2013.6.10). New tool: Web Resources Manager (v1.2013.6.11) - retrieve list of unused web resources; retrieve web resources from a solution. All tools list: Access Checker (v1.2013.2.5), Attribute Bulk Updater (v1.2013.1.17), FetchXml Tester (v1.2013.3.4), Iconator (v1.2013.1.17), Metadata Document Generator (v1.2013.6.10), Privilege...
    • Document.Editor: 2013.23: What's new for Document.Editor 2013.23: new Insert Emoticon support; improved Format support; minor bug fixes, improvements and speed-ups
    • Christoc's DotNetNuke Module Development Template: DotNetNuke 7 Project Templates V2.4 for VS2012: V2.4 - release date 6/10/2013. Items addressed in this 2.4 release: updated MSBuild Community Tasks reference to 1.4.0.61; setting up your DotNetNuke module development environment; installing Christoc's DotNetNuke Module Development Templates; customizing the latest DotNetNuke module development project templates
    • Layered Architecture Sample for .NET: Leave Sample - June 2013 (for .NET 4.5): Thank you for downloading Layered Architecture Sample. Please read the accompanying README.txt file for setup and installation instructions. This is the first of a series of revised samples that will be released to illustrate the layered architecture design pattern. This version is only supported on Visual Studio 2012. This sample illustrates the use of ASP.NET Web Forms, ASP.NET Model Binding, Windows Communication Foundation (WCF), Windows Workflow Foundation (WF) and Microsoft Ente...
    • Papercut: Papercut 2013-6-10: Feature: shows From, To, Date and Subject of email. Feature: async UI and loading spinner. Enhancement: improved speed when loading large attachments. Enhancement: decoupled SMTP server into a secondary assembly. Enhancement: upgraded to .NET v4. Fix: messages lost when received very fast. Fix: email encoding issues on display / automatically detect message encoding. Installation note: installation is copy and paste. Incoming messages are written to the start-up directory of Papercut. If you do n...
    • Supporting Guidance and Whitepapers: v1.BETA Unit Test Generator Documentation: Welcome to the Unit Test Generator. Once you’ve moved to Visual Studio 2012, what’s a dev to do without the Create Unit Tests feature? Based on the high demand on User Voice for this feature to be restored, the Visual Studio ALM Rangers have introduced the Unit Test Generator Visual Studio extension. The extension adds the “create unit test” feature back, with a focus on automating project creation, adding references and generating stubs, extensibility, and targeting of multiple test framewor...
    • MapWindow 4: MapWindow GIS v4.8.8 - Release Candidate - 32Bit: Download the release notes here: http://svn.mapwindow.org/svnroot/MapWindow4Dev/Bin/MapWindowNotes.rtf
    • LINQ to Twitter: LINQ to Twitter v2.1.06: Supports .NET 3.5, .NET 4.0, .NET 4.5, Silverlight 4.0, Windows Phone 7.1, Windows Phone 8, Client Profile, Windows 8, and Windows Azure. 100% Twitter API coverage. Also supports Twitter API v1.1! Also on NuGet.
    • VR Player: VR Player 0.3.1 ALPHA: New plugin system with individual folders; TrackIR support; Maya and 3ds Max format support; dual screen support; mono layouts (left and right); cylinder height parameter; barrel effect factor parameter; Razer Hydra filter parameter; VRPN bug fixes; UI improvements; performance improvements; stabilization and logging with Log4Net; new default values based on user feedback; CTRL key to open menu
    • SimCityPak: SimCityPak 0.1.0.8: New features: import BMP color palettes for vehicles; import RASTER files (uncompressed 8.8.8.8 DDS files); view different channels of RASTER files or a preview of all layers combined; find text in javascripts; TGA viewer; ground textures added to lot editor; many additional identified instances and properties
    • Wsus Package Publisher: Release v1.2.1306.09: Adds more verifications on certificate validation; WPP will not let the user try publishing an update until the certificate is valid. Adds the certificate expiration date on the 'About' form. Filters approval to keep a user from approving an update for uninstallation when the update does not support uninstallation. Adds the server and console version on the 'About' form; WPP will not let the user publish an update until the server and console are at the same level. WPP does not let the user ...
    • AJAX Control Toolkit: June 2013 Release: AJAX Control Toolkit Release Notes - June 2013 release, version 7.0607. AJAX Control Toolkit .NET 4.5 – AJAX Control Toolkit for .NET 4.5 and sample site (recommended). AJAX Control Toolkit .NET 4 – AJAX Control Toolkit for .NET 4 and sample site (recommended). AJAX Control Toolkit .NET 3.5 – AJAX Control Toolkit for .NET 3.5 and sample site (recommended). Notes: instructions for using the AJAX Control Toolkit with ASP.NET 4.5 can be found at...
    • Rawr: Rawr 5.2.1: This is the downloadable WPF version of Rawr! For the web-based version see http://elitistjerks.com/rawr.php You can find the version notes at: http://rawr.codeplex.com/wikipage?title=VersionNotes Rawr Addon (NOT UPDATED YET FOR MOP): we now have an official Rawr addon for in-game exporting and importing of character data, hosted on Curse. The addon does not perform calculations like Rawr; it simply shows your exported Rawr data in WoW tooltips and lets you export your character to Rawr (including ba...
    • VG-Ripper & PG-Ripper: PG-Ripper 1.4.13: Changes - NEW: added support for "ImageJumbo.com" links; FIXED: ripping of threads with multiple pages

    New Projects

    • Control de stock: A system for stock control
    • Emails Outlook Mac Recovery Software That Is Provenly Better Than Others: Recover OLM emails with Outlook Mac recovery software that restores Mac OLM files and converts OLM files to EML and DBX file formats.
    • Horarios e Disciplinas: The CEPE timetable project is developed by an experienced analyst-programmer and two interns keen to build their knowledge of C#, Razor, Entity Framework and AJAX.
    • Hyper-v VHost and VM Inventory Reports: Get a detailed inventory/report from your VHost and its VMs. A PowerShell script that dumps the report as a text file.
    • jean0614changbranch: dd
    • jean0614jabbrchangebranch: d
    • Landscape: Planned Maintenance System - a system used for any fleet of assets where regular maintenance is required.
    • MongoCamp: MongoCamp for MongoDB
    • No-nonsense Flickr Uploader: Simple, easy-to-use, stable Flickr uploader for large batch uploads.
    • Outlook PST Converter to Export, Convert & Save PST in Other Formats: An Outlook PST converter to export Outlook emails from PST format to other formats, such as changing the format of an Outlook PST file to PDF or MSG.
    • Penrose Tiles: A simple Windows Forms based Penrose tile generator.
    • Ranch Master - an Object Oriented Progamming 2013 College Project: Raise the sheep by keeping the hunger & thirst levels low, keep their condition at 'Healthy', collect the wool produced, and trade the wool for points.
    • SH Demo App: Summary!
    • Stock Right: Initial release
    • Testing Project: testing
    • testjabbr0613: test
    • Units of Measure: Simple solution to cope with units of measure in C# applications. You can develop your own unit system, be it SI-based or anything else that suits your needs.
    • User Friendly Registration Plugin for DotNetNuke: This plugin makes the DotNetNuke registration process more user friendly. The main idea is to inform the user about possible mistakes in the registration form immediately,

    Read the article

  • Breaking The Promise of Web Service Interoperability

    The promise of web service interoperability is achievable if certain technical and non-technical issues are dealt with properly. As the world gets smaller and smaller thanks to our growing global economy, the need for security is increasing. The use of security is vital in the transferring of data from one server to another. As new security standards and protocols are created, the environments for web service hosts and clients must be in sync so that they can communicate using the same standards and protocols. For example, if a new protocol X can only be implemented on computers built after 2010, then all computers built prior to 2010 will not be able to connect to any web service host that uses only this protocol in its security policy. If the host and client of a web service cannot communicate using a set of common standards and protocols, then web services are not available to these clients, thus breaking the promise of interoperability.

    Another limiting factor of web services is governmental policies and regulations. I experienced this first hand last year when I had to work on a project that dealt with personally identifiable information (PII) regarding US and Canadian citizens. Currently the Canadian government regulates that any data pertaining to Canadian citizens must be stored in Canada only. The issue we had was the fact that we are a US-based company that sometimes works with Canadian PII as part of a service that we provide. Because we are a US-based company dealing with Canadian data, we had to place a file server inside the border of Canada in order for us to continue working for our Canadian customers.

    Read the article

  • How to go from Mainframe to the Cloud?

    - by Ruma Sanyal
    Running applications on IBM mainframes is expensive, complex, and hinders IT responsiveness. The high costs from frequent forced upgrades, long integration cycles, and complex operations infrastructures can only be alleviated by migrating away from a mainframe environment. Further, data centers are planning for cloud enablement pinned on principles of operating at significantly lower cost, very low upfront investment, operating on commodity hardware and open, standards-based systems, and decoupling of hardware, infrastructure software, and business applications. These operating principles are in direct contrast with the principles of operating businesses on mainframes.

    By utilizing technologies such as Oracle Tuxedo, Oracle Coherence, and Oracle GoldenGate, businesses are able to quickly and safely migrate away from their IBM mainframe environments. Further, by running Oracle Tuxedo and Oracle Coherence on Oracle Exalogic, the first and only integrated cloud machine on the market, Oracle customers can not only run their applications on standards-based open systems, significantly cutting their time to market and costs, they can start their journey of cloud-enabling their mainframe applications.

    • Oracle Tuxedo re-hosting tools and techniques can provide automated migration coverage for more than 95% of mainframe application assets, at a fraction of the cost
    • Oracle GoldenGate can migrate data from mainframe systems to open systems, eliminating risks associated with the data migration
    • Oracle Coherence hosts transactional data in memory, providing mainframe-like data performance and linear scalability
    • Running Oracle software on top of Oracle Exalogic empowers customers to start their journey of cloud-enabling their mainframe applications

    Join us in a series of events across the globe where you'll learn how you can build your enterprise cloud and add tremendous value to your business. In addition, meet with Oracle experts and your peers to discuss best practices and see how successful organizations are lowering total cost of ownership and achieving rapid returns by moving to the cloud. Register for the Oracle Fusion Middleware Forum event in a city near you!

    Read the article

  • Logic that can traverse all possible layouts without checking every combination of identical pieces?

    - by George Bailey
    Suppose we have a grid of arbitrary size, which is filled by blocks of various widths and heights. There are many 2x2 blocks (meaning they take a total of 4 cells in the grid) and many 3x3 blocks, as well as some 5x4, 4x5, 2x3, etc. I was hoping I could set up a program that would look at all possible layouts, rank them, and find the best one. Simply put, it would look at all possible positions of these blocks and see which setup has the best rank. (The rank is based on how many of these can be connected by a roadway system of 1x1 road blocks, and how many squares can be left empty after this is done - wanting to fit as many blocks as possible with the fewest roads.) My question is: how should I traverse all the possibilities? I could take all the blocks and try them one at a time, but since all 2x2 blocks are equal, and there are a couple dozen of them, there is no point in trying every combination there. For example:

        AA BBB
        AA BBB
        CCBBB
        CCEEE
        DD EEE
        DD EEE

    is exactly the same as:

        CC EEE
        CC EEE
        AAEEE
        AABBB
        DD BBB
        DD BBB

    You notice that there are 2 3x3 blocks and 3 2x2 blocks in my two examples. Based on the model I have now, the computer would try both of these combinations, as well as many others. The problem is that it is going to try every single possible variation of my couple dozen 2x2 blocks, and that is sorely inefficient. Is there a reasonable way to take out this duplicated work, somehow getting the computer program to treat all 2x2 blocks as equal/identical, instead of requiring rearranging/swapping of these identical blocks? Can this be done?
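
    One standard way to remove this duplicated work (offered here only as a sketch, not part of the original question) is to enumerate by block type with a remaining count, rather than by individual block, and to always place the next block so that it covers the first empty cell in row-major order. Identical blocks are then never distinguished from one another, so each distinct layout is generated exactly once. A minimal C# illustration; the grid size and the block sizes/counts are hypothetical:

        using System;

        class LayoutEnumerator
        {
            const int Rows = 6, Cols = 6;
            static readonly bool[,] Used = new bool[Rows, Cols];
            // Each row is a block type: { width, height, copies remaining }.
            static readonly int[,] Types = { { 2, 2, 3 }, { 3, 3, 2 } };
            static long layouts;

            static void Main()
            {
                Search();
                Console.WriteLine($"Distinct layouts: {layouts}");
            }

            static void Search()
            {
                // Find the first empty cell in row-major order.
                int r = -1, c = -1;
                for (int i = 0; i < Rows * Cols && r < 0; i++)
                    if (!Used[i / Cols, i % Cols]) { r = i / Cols; c = i % Cols; }

                if (r < 0) { layouts++; return; } // grid full: one complete layout

                // Option 1: leave this cell for a 1x1 road block.
                Used[r, c] = true;
                Search();
                Used[r, c] = false;

                // Option 2: cover this cell with a block. Any block covering the
                // first empty cell must have its top-left corner here (all earlier
                // cells are full), so trying only this position is still complete.
                for (int t = 0; t < Types.GetLength(0); t++)
                {
                    int w = Types[t, 0], h = Types[t, 1];
                    if (Types[t, 2] == 0 || !Fits(r, c, w, h)) continue;
                    SetBlock(r, c, w, h, true);
                    Types[t, 2]--;      // one fewer identical copy remains
                    Search();
                    Types[t, 2]++;
                    SetBlock(r, c, w, h, false);
                }
            }

            static bool Fits(int r, int c, int w, int h)
            {
                if (r + h > Rows || c + w > Cols) return false;
                for (int i = r; i < r + h; i++)
                    for (int j = c; j < c + w; j++)
                        if (Used[i, j]) return false;
                return true;
            }

            static void SetBlock(int r, int c, int w, int h, bool value)
            {
                for (int i = r; i < r + h; i++)
                    for (int j = c; j < c + w; j++)
                        Used[i, j] = value;
            }
        }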

    Read the article

  • You Can’t Upload An Empty File To SharePoint 2007 Or SharePoint 2010

    - by Brian Jackett
    The title of this post is pretty self-explanatory, but I thought it worth mentioning since I had never run across this rule until just recently. A few weeks ago I was testing out a new workflow attached to a SharePoint 2007 document library. I uploaded various file types to ensure all were handled properly. One of the files I happened to test with was an empty .txt file, which SharePoint rejected with an error. As the error message explains, you aren’t allowed to upload a file that is empty. Fast forward to this week, when I was doing some research for my upcoming SharePoint 2010 beta exams. I remembered that error I got a few weeks ago and decided to try it out with SharePoint 2010 as well. No surprise, I got a similar error.

    Conclusion

    Next time you are uploading files to a SharePoint 2007 or 2010 document library, make sure the file is not empty. Coincidentally, when I tweeted about this issue a few friends replied that they had also found this error recently. I don’t know the internal reasoning why this is prevented, but I assume it has something to do with how the blob for the file is stored in the database. I assume that this would still be the case even if you had Remote Blob Storage (RBS) configured for your farm, but I don’t have access to such a farm to confirm. If anyone reading this does have access and wants to confirm, that would be appreciated - just leave a comment.

    -Frog Out

    Read the article

  • "Siebel2FusionCRM Integration" solution by ec4u (D)

    - by Richard Lefebvre
    ec4u, a CRM system integration leader based in Germany and Switzerland, and a historical Oracle/Siebel partner, offers a complete "Siebel2FusionCRM Integration" solution based on tools, methodology and services.

    The solution's main objectives are:

    • Integration between Siebel (on-premise) and Fusion CRM / Marketing (“in the cloud”)
    • Accounts, Contacts and Addresses are maintained by Sales in Siebel CRM and synchronized in real-time into Fusion CRM / Marketing
    • CDM processing ensures clean data for marketing campaigns (validation and deduplication)
    • Create e-mail marketing campaigns and newsletters in Fusion

    The solution features:

    • Upsert processes figure out what information needs to be updated, inserted or terminated (deleted). However, as Siebel is the data master, it is still a one-way synchronization.
    • Handling of deleted or nullified information by terminating it in Fusion CRM (setting start and end dates to define the validity period)
    • Initial load and real-time synchronization use the same processes
    • Invocations/operations can be repeated, due to no transactional support from Fusion web services
    • Tagging of sub-entries in case of 1-to-N mapping (example: a telephone number is one simple field in Siebel, but in Fusion you can have multiple telephone numbers in a sub-table)
    • E-mail notification in case of any error (containing the error message, instance number, and detailed payload)
    • Schematron validation

    Interested? Looking for more details or a partnership with ec4u for a "Siebel2FusionCRM Integration" project? Contact: Gregor Bublitz, Director Expert Services ([email protected])

    Read the article

  • Working with Timelines with LINQ to Twitter

    - by Joe Mayo
    When first working with the Twitter API, I thought that using SinceID would be an effective way to page through timelines. In practice it doesn’t work well, for various reasons. To explain why, Twitter published an excellent document that is a must-read for anyone working with timelines: Twitter Documentation: Working with Timelines. This post shows how to implement the recommended strategies in that document by using LINQ to Twitter. You should read the document in its entirety before moving on, because my explanation will start at the bottom and work back up to the top in relation to the Twitter document. What follows is an explanation of SinceID, MaxID, and how they come together to help you efficiently work with Twitter timelines.

    The Role of SinceID

    Specifying SinceID says to Twitter, “Don’t return tweets earlier than this”. What you want to do is store this value after every timeline query set so that it can be reused on the next set of queries. The next section will explain what I mean by query set, but a quick explanation is that it’s a loop that gets all new tweets. The SinceID is a backstop to avoid retrieving tweets that you already have. Here’s some initialization code that includes a variable named sinceID that will be used to populate the SinceID property in subsequent queries:

        // last tweet processed on previous query set
        ulong sinceID = 210024053698867204;

        ulong maxID;
        const int Count = 10;
        var statusList = new List<Status>();

    Here, I’ve hard-coded the sinceID variable, but this is where you would initialize sinceID from whatever storage you choose (i.e. a database). The first time you ever run this code, you won’t have a value from a previous query set. Initially setting it to 0 might sound like a good idea, but what if you’re querying a timeline with lots of tweets? Because of the number of tweets and rate limits, your query set might take a very long time to run. A caveat is that Twitter won’t return an entire timeline back to Tweet #0, but rather only go back a certain period of time, the limits of which are documented for individual Twitter timeline API resources. So initializing SinceID to too low a number can result in a lot of initial tweets, yet there is a limit to how far you can go back. What you’re trying to accomplish in your application should guide you in how to initially set SinceID. I have more to say about SinceID later in this post.

    The other variables initialized above include the declaration for MaxID, Count, and statusList. The statusList variable is a holder for all the timeline tweets collected during this query set. You can set Count to any value you want as the largest number of tweets to retrieve, as defined by individual Twitter timeline API resources. To effectively page results, you’ll use the maxID variable to set the MaxID property in queries, which I’ll discuss next.

    Initializing MaxID

    On your first query of a query set, MaxID will be whatever the most recent tweet is that you get back. Further, you don’t know what MaxID is until after the initial query. The technique used in this post is to do an initial query and then use the results to figure out what the next MaxID will be. Here’s the code for the initial query:

        var userStatusResponse =
            (from tweet in twitterCtx.Status
             where tweet.Type == StatusType.User &&
                   tweet.ScreenName == "JoeMayo" &&
                   tweet.SinceID == sinceID &&
                   tweet.Count == Count
             select tweet)
            .ToList();

        statusList.AddRange(userStatusResponse);

        // first tweet processed on current query
        maxID = userStatusResponse.Min(
            status => ulong.Parse(status.StatusID)) - 1;

    The query above sets both SinceID and Count properties. As explained earlier, Count is the largest number of tweets to return, but the number can be less. A couple of reasons why the number of tweets returned could be less than Count include the fact that the user, specified by ScreenName, might not have tweeted Count times yet, or might not have tweeted at least Count times within the maximum number of tweets that can be returned by the Twitter timeline API resource. Another reason could be that there aren’t Count tweets between now and the tweet ID specified by sinceID. Setting SinceID constrains the results to only those tweets that occurred after the specified tweet ID, assigned via the sinceID variable in the query above. The statusList is an accumulator of all tweets received during this query set. To simplify the code, I left out some logic to check whether there were no tweets returned. If the query above doesn’t return any tweets, you’ll receive an exception when trying to perform operations on an empty list. Yeah, I cheated again.

    Besides querying initial tweets, what’s important about this code is the final statement that sets maxID. It retrieves the lowest-numbered status ID in the results. Since the lowest-numbered status ID is for a tweet we already have, the code decrements the result by one to keep from asking for that tweet again. Remember, SinceID is not inclusive, but MaxID is. The maxID variable is now set to the highest possible tweet ID that can be returned in the next query. The next section explains how to use MaxID to help get the remaining tweets in the query set.

    Retrieving Remaining Tweets

    Earlier in this post, I defined a term that I called a query set. Essentially, this is a group of requests to Twitter that you perform to get all new tweets. A single query might not be enough to get all new tweets, so you’ll have to start at the top of the list that Twitter returns and keep making requests until you have all new tweets. The previous section showed the first query of the query set. The code below is a loop that completes the query set:

        do
        {
            // now add sinceID and maxID
            userStatusResponse =
                (from tweet in twitterCtx.Status
                 where tweet.Type == StatusType.User &&
                       tweet.ScreenName == "JoeMayo" &&
                       tweet.Count == Count &&
                       tweet.SinceID == sinceID &&
                       tweet.MaxID == maxID
                 select tweet)
                .ToList();

            if (userStatusResponse.Count > 0)
            {
                // first tweet processed on current query
                maxID = userStatusResponse.Min(
                    status => ulong.Parse(status.StatusID)) - 1;

                statusList.AddRange(userStatusResponse);
            }
        }
        while (userStatusResponse.Count != 0 && statusList.Count < 30);

    Here we have another query, but this time it includes the MaxID property. The SinceID property prevents reading tweets that we’ve already read, and Count specifies the largest number of tweets to return. Earlier, I mentioned how important it is to check how many tweets were returned, because failing to do so will result in an exception when subsequent code runs on an empty list. The code above protects against this problem by only working with the results if Twitter actually returns tweets. Reasons why there wouldn’t be results include: if the first query got all the new tweets, there wouldn’t be more to get; and there might not have been any new tweets between the SinceID and MaxID settings of the most recent query.

    The code for loading the returned tweets into statusList and getting the maxID is the same as previously explained. The important point here is that MaxID is being reset, not SinceID. As explained in the Twitter documentation, paging occurs from the newest tweets to the oldest, so setting MaxID lets us move from the most recent tweets down to the oldest as specified by SinceID. The two loop conditions cause the loop to continue as long as tweets are being read, or until a maximum number of tweets has been read. Logically, you want to stop reading when you’ve read all the tweets, and that’s indicated by the fact that the most recent query did not return results. I put in the check to stop after 30 tweets to keep the demo from running too long – in the console the response scrolls past the available buffer and I wanted you to be able to see the complete output. Yet there’s another point to be made about constraining the number of items you return at one time. The Twitter API has rate limits, and making too many queries per minute will result in an error from Twitter that LINQ to Twitter raises as an exception. To use the API properly, you’ll have to ensure you don’t exceed this threshold. Looking at statusList.Count as done above is rather primitive, but you can implement your own logic to properly manage your rate limit. Yeah, I cheated again.

    Summary

    Now you know how to use LINQ to Twitter to work with Twitter timelines. After reading this post, you have a better idea of the role of SinceID - the oldest tweet already received. You also know that MaxID is the largest tweet ID to retrieve in a query. Together, these settings allow you to page through results via one or more queries. You also understand what factors affect the number of tweets returned and considerations for potential error handling logic. The full example of the code for this post is included in the downloadable source code for LINQ to Twitter.

    @JoeMayo

    Read the article

  • how to change some of the numbers in word to be arabic numbers without changing a global setting in windows?

    - by Karim
    I have a Word document. It has two parts, one English and one Arabic. The problem is that all the numbers are English numerals [0123456789], but I want the Arabic part's numbers to be Arabic numerals [٠١٢٣٤٥٦٧٨٩]. How can I do that in Word 2007 or 2010? Thanks.

    Edit: Since I didn't receive any response to the question, I created a piece of software that converts English numerals to Arabic, and then I use it to convert the numbers in the document. But I'm still wondering if there is an easier way to do it.

    Read the article

  • Internal Mic not working on Dell Adamo 13

    - by AFD
    So I'm using Elementary Luna beta (based on Ubuntu 12.04 LTS) on a spare HDD after successfully using the Jupiter release (based on 10.10) for many years. In Jupiter, my internal mic worked out of the box, and with Skype installed from a deb everything was set up without having to step foot inside the terminal. In the Luna beta the internal mic is not recognised by the OS and so is also not recognised by Skype. I believe the hardware is this (from sudo lshw):

        *-multimedia
             description: Audio device
             product: 82801I (ICH9 Family) HD Audio Controller
             vendor: Intel Corporation
             physical id: 1b
             bus info: pci@0000:00:1b.0
             version: 03
             width: 64 bits
             clock: 33MHz
             capabilities: pm msi pciexpress bus_master cap_list
             configuration: driver=HDA Intel latency=0
             resources: irq:46 memory:f8600000-f8603fff

    As well as this, I ran cat /proc/asound/card0/codec* | grep Codec, which gave me:

        Codec: IDT 92HD73C1X5
        Codec: Intel Cantiga HDMI

    How do I tweak Luna to get this hardware working properly? I'm able to switch out my Luna HDD with the Jupiter HDD to help troubleshoot what the differences are between the two and why the more recent OS can't find / use the mic correctly. Thanks in advance for any help you can give.

    Read the article

  • What are the hard and fast rules for Cache Control?

    - by Metalshark
    Confession: sites I maintain have different rules for Cache-Control, mostly based on the default configuration of the server, followed up with recommendations from the Page Speed & YSlow Firefox plug-ins and the Network Resources view in Google's Speed Tracer. Cache-Control is set to private/public depending on what they say to do, ETag/Last-Modified headers are only tinkered with if YSlow suggests there is something wrong, and Vary: Accept-Encoding seems necessary when manually gzipping files for Amazon CloudFront.

    When reading through the material on the different options and what they do, there seems to be conflicting information, rules for broken proxies, and cargo-cult configurations. Any of the official information provided by the analysis tools mentioned above is quite inaccessible, as it deals with each topic individually instead of as a unified strategy (so there is no cross-referencing of techniques). For example, it seems to make no sense that the speed analysis tools rate a site with ETags the same as a site without them if they are meant to help with caching.

    What are the hard and fast rules for a platform-agnostic Cache-Control strategy?

    EDIT: A link through Jeff Atwood's article explains caching in superb depth. For the record though, here are the hard and fast rules:

    • If the file is compressed using GZIP, etc., use "Cache-Control: private", as a proxy may return the compressed version to a client that does not support it (the browser cache will hold files marked this way though). Also remember to include "Vary: Accept-Encoding" to say that it is compressible.
    • Use Last-Modified in conjunction with ETag - belt-and-braces usage provides both validators; since ETag is based on file contents instead of modification time alone, using both covers all bases. NOTE: AOL's PageTest has a carte blanche approach against ETags for some reason.
    • If you are using Apache on more than one server to host the same content, then remove the implicitly declared inode from ETags by excluding it from the FileETag directive (i.e. "FileETag MTime Size"), unless you are genuinely using the same live filesystem.
    • Use "Cache-Control: public" wherever you can - this means that proxy servers (and the browser cache) will return your content even if the rest of the page needs HTTP authentication, etc.
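
    As an assumed illustration of the first two rules (not from the original post; the exact header values, including the max-age, are hypothetical), the response headers for a gzipped stylesheet might look like this:

        HTTP/1.1 200 OK
        Content-Type: text/css
        Content-Encoding: gzip
        Cache-Control: private, max-age=31536000
        Vary: Accept-Encoding
        Last-Modified: Sat, 01 May 2010 00:00:00 GMT
        ETag: "5f3e-4c8a2b3e"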

    Read the article

  • Portal And Content – Introduction (1 of 7)

    - by Stefan Krantz
    The coming posts over the next two months will be part of a new series. The idea is to help the reader understand how to enable a versatile and manageable portal. Each post will go through a specific use case or lifecycle group of events that a content-driven portal requires the development team to consider. The current plan is to deliver the following subjects, each topic enclosed in a separate blog post:

    1. Introduction – Introduction to the series of posts and what to expect at the end of the series
    2. Components, part 1 – UCM, Site Studio and a high-level introduction to content templates
    3. Components, part 2 – Page templates and navigation model
    4. Components, part 3 – Applied Customization Framework for Content Presenter taskflows
    5. Scenario 1 – Enable a portal for runtime administration
    6. Scenario 2 – Enable a portal for internationalization
    7. Scenario 3 – Enable a portal for content workflows

    Background

    This post series has been issued to help customers, partners and consultants understand the concept of a WebCenter Portal project where the main focus, or a majority of the portal, is content interaction. Today, most portal installations Oracle WebCenter Portal is involved in have a vast majority of content-based pages. Many of these portal projects have run, or will run, into challenges; to mitigate these challenges, the portal and content lifecycle has to be well designed. The coming posts will address the main components that should be involved when creating such scenarios; they will also go into detail on the process by describing three solution scenarios. The aim of the scenarios is to give the reader a more hands-on understanding of the concept of building and architecting a content-driven portal. The scenarios were selected based on the most common use cases that we have identified to date.

    Read the article

  • Tips for achieving "continual" delivery

    - by Ben
    A team is experiencing difficulty releasing software on a frequent basis (once every week). What follows is a typical release timeline.

    During the iteration:
    • Developers work on stories on the backlog on short-lived (this is enthusiastically enforced) feature branches based on the master branch.
    • Developers frequently pull their feature branches into the integration branch, which is continually built and tested (as far as the test coverage goes) automatically.
    • The testers have the ability to auto-deploy integration to a staging environment, and this occurs multiple times per week, enabling continual running of their test suites.

    Every Monday:
    • There is a release planning meeting to determine which stories are "known good" (based on the testers' work), and hence will be in the release. If there is a known issue with a story, the source branch is pulled out of integration.
    • No new code (only bug fixes requested by the testers) may be pulled into integration on this Monday, to ensure the testers have a stable codebase to cut a release from.

    Every Tuesday:
    • The testers have tested the integration branch as much as they possibly can in the time available, and there are no known bugs, so a release is cut and pushed out to the production nodes slowly.

    This sounds OK in practice, but we have found that it is incredibly difficult to achieve. The team sees the following symptoms:

    • "subtle" bugs are found in production that were not identified in the staging environment
    • last-minute hot-fixes continue into the Tuesday
    • problems in the production environment require roll-backs, which blocks continued development until a successful live deployment is achieved and the master branch can be updated (and hence branched from)

    I think test coverage, code quality, the ability to regression test quickly, last-minute changes and environmental differences are at play here. Can anyone offer any advice regarding how best to achieve "continual" delivery?

    Read the article

  • Parameters for selection of operating system, memory and processor for an embedded system?

    - by James
    I am developing embedded real-time system software (in C). I have designed the software architecture - we know the various objects required, the interactions required between them, and the IPC communication between tasks. Based on this information, I need to decide on the operating system (RTOS), microprocessor and memory size requirements. (Most likely I will be using Quadros, as it has been suggested by the client based on their prior experience with similar projects.) But I am confused about which one to begin with, since the choice of one could impact the selection of another. Could you also guide me on the parameters to consider when estimating the memory requirements from the software design (lower limit and upper limit of memory requirement)? (Cost of the component(s) can be ignored for this evaluation.)

    Read the article

  • Generating documents with templating from a form

    - by Anna
    Hello, I would like to create a document generator with templating. The workflow should be as follows:

    1. The user inputs data into a static form (simple text inputs).
    2. The user chooses a graphically designed template.
    3. A document with the chosen template containing the user data is generated.

    The initial template repository is prepared in advance, but it should be easy to add new templates to the process. I have the full MS Office suite, and the preferred file format is MS .doc. I can do a little VB scripting if needed, but I prefer not to. Any advice would be greatly appreciated. Thank you, Anna

    Read the article

  • Data Structure for Small Number of Agents in a Relatively Big 2D World

    - by Seçkin Savasçi
    I'm working on a project where we will implement a kind of world simulation, where there is a square 2D world. Agents live in this world and make decisions like moving or replicating themselves based on their neighbor cells (world = grid) and some extra parameters (which are not based on the state of the world). I'm looking for a data structure to implement such a project. My concerns are:

    • I will implement this 3 times: sequential, using OpenMP, and using MPI. So if I can use the same structure for all of them, that will be quite good.
    • The first thing that comes up is keeping a 2D array for the world and storing agent references in it, then simulating the world for each time slice by checking every cell in each iteration and doing further processing if an agent is found in the cell. The downside is: what if I have a 1000x1000 world and only 5 agents in it? It will be overkill for both the sequential and parallel versions to check each cell and look for possible agents in them.
    • I can use a quadtree and store agents in it, but then how can I get the information about neighbor cells?

    Please let me know if I should elaborate more. A sketch of one possible approach follows.
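
    One option, sketched here under stated assumptions (Agent and Decide are placeholder names, and C# stands in for whatever language the project actually uses), is to store only the occupied cells in a hash map keyed by coordinates: each time slice then iterates over the agents rather than the whole grid, and neighbor lookups remain O(1):

        using System.Collections.Generic;

        class Agent
        {
            public void Decide(List<Agent> neighbors) { /* move, replicate, ... */ }
        }

        class World
        {
            // Sparse index: only occupied cells are stored, so a 1000x1000 world
            // with 5 agents costs 5 entries, not a million cells.
            readonly Dictionary<(int X, int Y), Agent> agents =
                new Dictionary<(int X, int Y), Agent>();

            public void Add(int x, int y, Agent a) => agents[(x, y)] = a;

            public void Step()
            {
                // Snapshot the entries so agents may move or replicate mid-update.
                var snapshot = new List<KeyValuePair<(int X, int Y), Agent>>(agents);

                foreach (var entry in snapshot)
                {
                    var (x, y) = entry.Key;

                    // Gather the 8 neighboring agents with O(1) lookups each.
                    var neighbors = new List<Agent>();
                    for (int dx = -1; dx <= 1; dx++)
                        for (int dy = -1; dy <= 1; dy++)
                            if ((dx != 0 || dy != 0) &&
                                agents.TryGetValue((x + dx, y + dy), out var n))
                                neighbors.Add(n);

                    entry.Value.Decide(neighbors);
                }
            }
        }

    The same pattern ports directly to C/C++ hash tables for the OpenMP and MPI versions, and unlike a quadtree it answers the fixed 8-cell neighborhood query without any tree traversal.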

    Read the article

  • SOA Forcing A Shift In IT Governance

    As more and more companies adopt a service-oriented approach to developing and maintaining existing enterprise systems, IT governance also needs to shift its philosophies to fit the emerging development paradigm. When I first started programming, companies placed an emphasis on a “Code and Go” software development style. They only developed for current problems and did not really look at how the company could leverage some of the code we were developing across the entire enterprise system. The concept of Service Oriented Architecture (SOA) has dramatically shifted how we develop enterprise software by emphasizing software processes as company assets. This has driven some to start developing new components as processes strictly for the possibility of future integration of existing and new systems. I personally like this new paradigm because it truly promotes code reusability. However, most enterprise-level IT governance policies were created prior to the introduction of SOA in their respective organizations. This can create a sense of the Wild West for developers working on projects related to SOA. This is due to the fact that a lot of the standards and policies implemented by enterprise IT governing boards were initially written for development under the “Code and Go” paradigm and do not take into account idiosyncrasies found in SOA/integration-based development. As IT governance moves forward, its focus should aim more for “Develop to Integrate” than “Code and Go” philosophies.

    Examples of the “Develop to Integrate” philosophy:

    • Defining preferred data transfer methodologies (XML vs. JSON), and when to use them
    • Updating security best practices for exposing public services based on existing standard security policies
    • Defining when to create a new SOA project vs. implementing localized components that could be reused elsewhere in the enterprise

    Read the article

  • Printing the second page from a Calc sheet

    - by Luke
    I've created a simple invoice document in LibreOffice Calc and it consists of 2 pages. I have defined the print range for my document as:

        $A$1:$D$33,$D$34:$D$35

    My first page holds the actual invoice information; the second page is a single merged cell holding terms-and-conditions text (wrapped to the cell width). The second page is defined by a row break. When I export the sheet as a PDF, the first page comes out great, but the second page with the terms text is all wrong. On the left-hand side I see a portion of the text (it looks like a single column), and when I select the text inside the PDF I can see it go off the page somewhere to the left. I get the same result in a print preview. I'm at a complete loss on how to approach this problem and any insight is much appreciated.

    Read the article

  • Anonymous vs. logged-in users on my site & Google Analytics

    - by Flowpoke
    I'd like to be able to run two different 'tracks' for Google Analytics: one for anonymous users of the site and another for users who are logged in. I say "track" because I'm not sure of the term - but I definitely know I want it all to be in the same Analytics account, I just want to segregate my logged-in users. In the site template, I can very easily add a conditional to display one or the other Analytics code snippet, which I'm hoping this comes down to. Although I'm not sure, it seems that the last digit in your Analytics ID (e.g. UA-15XXXX0-X) could be incremented to gain such additional 'tracks'...? Any tips? Am I doing it wrong? My current footer snippet:

        <script type="text/javascript">
          var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
          document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
        </script>
        <script type="text/javascript">
          try {
            var pageTracker = _gat._getTracker("UA-XXXXXXX-1");
            pageTracker._trackPageview();
          } catch(err) {}
        </script>

    Read the article

  • Top 25 security issues for developers of web sites

    - by BizTalk Visionary
    Sourced from: CWE

    This is a brief listing of the Top 25 items, using the general ranking. NOTE: 16 other weaknesses were considered for inclusion in the Top 25, but their general scores were not high enough. They are listed in the On the Cusp focus profile.

        Rank  Score  ID       Name
        [1]   346    CWE-79   Failure to Preserve Web Page Structure ('Cross-site Scripting')
        [2]   330    CWE-89   Improper Sanitization of Special Elements used in an SQL Command ('SQL Injection')
        [3]   273    CWE-120  Buffer Copy without Checking Size of Input ('Classic Buffer Overflow')
        [4]   261    CWE-352  Cross-Site Request Forgery (CSRF)
        [5]   219    CWE-285  Improper Access Control (Authorization)
        [6]   202    CWE-807  Reliance on Untrusted Inputs in a Security Decision
        [7]   197    CWE-22   Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal')
        [8]   194    CWE-434  Unrestricted Upload of File with Dangerous Type
        [9]   188    CWE-78   Improper Sanitization of Special Elements used in an OS Command ('OS Command Injection')
        [10]  188    CWE-311  Missing Encryption of Sensitive Data
        [11]  176    CWE-798  Use of Hard-coded Credentials
        [12]  158    CWE-805  Buffer Access with Incorrect Length Value
        [13]  157    CWE-98   Improper Control of Filename for Include/Require Statement in PHP Program ('PHP File Inclusion')
        [14]  156    CWE-129  Improper Validation of Array Index
        [15]  155    CWE-754  Improper Check for Unusual or Exceptional Conditions
        [16]  154    CWE-209  Information Exposure Through an Error Message
        [17]  154    CWE-190  Integer Overflow or Wraparound
        [18]  153    CWE-131  Incorrect Calculation of Buffer Size
        [19]  147    CWE-306  Missing Authentication for Critical Function
        [20]  146    CWE-494  Download of Code Without Integrity Check
        [21]  145    CWE-732  Incorrect Permission Assignment for Critical Resource
        [22]  145    CWE-770  Allocation of Resources Without Limits or Throttling
        [23]  142    CWE-601  URL Redirection to Untrusted Site ('Open Redirect')
        [24]  141    CWE-327  Use of a Broken or Risky Cryptographic Algorithm
        [25]  138    CWE-362  Race Condition

    Cross-site scripting and SQL injection are the 1-2 punch of security weaknesses in 2010. Even when a software package doesn't primarily run on the web, there's a good chance that it has a web-based management interface or HTML-based output formats that allow cross-site scripting. For data-rich software applications, SQL injection is the means to steal the keys to the kingdom. The classic buffer overflow comes in third, while more complex buffer overflow variants are sprinkled throughout the rest of the Top 25.
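
    To make the #2 item concrete, the standard defence against SQL injection in .NET code is to bind untrusted input as a parameter rather than concatenating it into the SQL text. A minimal sketch (the table, column and method names are hypothetical, not from the CWE listing):

        using System.Data.SqlClient;

        static class UserLookup
        {
            // Parameterized query: the user-supplied name travels as data,
            // never as SQL text, which defeats classic injection (CWE-89).
            public static string FindEmail(string connectionString, string userName)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(
                    "SELECT Email FROM Users WHERE Name = @name", conn))
                {
                    cmd.Parameters.AddWithValue("@name", userName); // bound, not concatenated
                    conn.Open();
                    return (string)cmd.ExecuteScalar();
                }
            }
        }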

    Read the article

  • Ranking drop after using reverse proxy for blog subdirectory and robots.txt for old blog subdomain

    - by user40387
    We have a 3Dcart store and a WordPress blog hosted on a separate server. Originally, we had a CNAME set up to point the blog to http://blog.example.com/. However, in our attempt to boost link-based and traffic-based authority on the main site, we've opted to do a reverse proxy to http://www.example.com/blog/. It’s been about two months since we finished the reverse proxy migration. It appears that everything is technically working as intended, including some robots and sitemap changes; the new URLs are even generating some traffic, as indicated in Google Analytics. While Google has been indexing the new URL locations, they’re ranking very poorly, even for non-competitive, long-tail keywords. Meanwhile, the old subdomain URLs are still ranking mostly as well as they used to (even though they aren’t showing meta titles and descriptions, due to being blocked by robots.txt). Our working theory is that Google has an old index of the subdomain URLs and is considering the new URLs to be duplicate content, since it’s being told not to crawl the subdomain and therefore can’t see the rel canonicals we have in place. To resolve this, we’ve updated the subdomain’s robots.txt to no longer block crawling and indexing. Theoretically, seeing the canonical tag on the subdomain pages will resolve any perceived duplicate-content issues. In the meantime, we were wondering if anyone has any other ideas. We are very concerned that we’ll be losing valuable traffic, as we’re entering our on-season at the moment.

    Read the article
