Search Results

Search found 15887 results on 636 pages for 'functions as parameters'.


  • What a Performance! MySQL 5.5 and InnoDB 1.1 running on Oracle Linux

    - by zeynep.koch(at)oracle.com
    The MySQL performance team at Oracle has recently completed a series of benchmarks comparing Read/Write and Read-Only performance of MySQL 5.5 with the InnoDB and MyISAM storage engines. Compared to MyISAM, InnoDB delivered 35x higher throughput on the Read/Write test and 5x higher throughput on the Read-Only test, with 90% scalability across 36 CPU cores. A full analysis of the results and the MySQL configuration parameters is documented in a new whitepaper. In addition to the benchmark, the new whitepaper also includes:
    - A discussion of the use cases for each storage engine
    - Best practices for users considering the migration of existing applications from MyISAM to InnoDB
    - A summary of the performance and scalability enhancements introduced with MySQL 5.5 and InnoDB 1.1
    The benchmark itself was based on Sysbench, running on AMD Opteron "Magny-Cours" processors and Oracle Linux with the Unbreakable Enterprise Kernel. You can learn more about MySQL 5.5 and InnoDB 1.1 from here, and download it from here to test whether you witness performance gains in your real-world applications. By Mat Keep

    Read the article

  • Creating packages in code – Execute SQL Task

    The Execute SQL Task is, for obvious reasons, very well used, so I thought if you are building packages in code the chances are you will be using it. The basic features of the task are quite straightforward to use: add the task and set some properties, just like any other. When you start interacting with variables, though, it can be a little harder to grasp, so these samples should see you through. Some of the more advanced features are explained in much more detail in our ever popular post The Execute SQL Task; here I'll just be showing you how to implement them in code. The abbreviated code blocks below demonstrate the different features of the task. The complete code has been encapsulated into a sample class which you can download (ExecSqlPackage.cs). Each feature described has its own method in the sample class, which is mentioned after the code block. This first sample just shows adding the task and setting the basic properties for a connection and, of course, a SQL statement.

    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Set required properties
    taskHost.Properties["Connection"].SetValue(taskHost, sqlConnection.ID);
    taskHost.Properties["SqlStatementSource"].SetValue(taskHost, "SELECT * FROM sysobjects");

    For the full version of this code, see the CreatePackage method in the sample class. The AddSqlConnection method is a helper method that adds an OLE-DB connection to the package; it is of course in the sample class file too.

    Returning a single value with a Result Set
    The following sample takes a different approach, getting a reference to the ExecuteSQLTask object itself, rather than just using the non-specific TaskHost as above. Whilst it means we need to add an extra reference to our project (Microsoft.SqlServer.SQLTask), it makes coding much easier, as we have compile-time validation of any properties and types we use. For the more complex properties that is very valuable and saves a lot of time during development. The query has also been changed to return a single value: one row and one column. The sample shows how we can return that value into a variable, which we also add to our package in the code. To do this manually you would set the Result Set property on the General page to Single Row and map the variable on the Result Set page in the editor.
    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, "localhost", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Add variable to hold result value
    package.Variables.Add("Variable", false, "User", 0);
    // Get the task object
    ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
    // Set core properties
    task.Connection = sqlConnection.Name;
    task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = 'sysrowsets'";
    // Set single row result set
    task.ResultSetType = ResultSetType.ResultSetType_SingleRow;
    // Add result set binding, map the id column to the variable
    task.ResultSetBindings.Add();
    IDTSResultBinding resultBinding = task.ResultSetBindings.GetBinding(0);
    resultBinding.ResultName = "id";
    resultBinding.DtsVariableName = "User::Variable";

    For the full version of this code, see the CreatePackageResultVariable method in the sample class. The other types of Result Set behaviour are just a variation on this theme: set the property and map the result bindings as required.

    Parameter Mapping for SQL Statements
    This final example uses a parameterised SQL statement, with the parameter value coming from a variable. The syntax varies slightly between connection types, as explained in the Working with Parameters and Return Codes in the Execute SQL Task help topic, but OLE-DB is the most commonly used, for which a question mark is the parameter value placeholder.

    Package package = new Package();
    // Add the SQL OLE-DB connection
    ConnectionManager sqlConnection = AddSqlConnection(package, ".", "master");
    // Add the SQL Task
    package.Executables.Add("STOCK:SQLTask");
    // Get the task host wrapper
    TaskHost taskHost = package.Executables[0] as TaskHost;
    // Get the task object
    ExecuteSQLTask task = taskHost.InnerObject as ExecuteSQLTask;
    // Set core properties
    task.Connection = sqlConnection.Name;
    task.SqlStatementSource = "SELECT id FROM sysobjects WHERE name = ?";
    // Add variable to hold parameter value
    package.Variables.Add("Variable", false, "User", "sysrowsets");
    // Add input parameter binding
    task.ParameterBindings.Add();
    IDTSParameterBinding parameterBinding = task.ParameterBindings.GetBinding(0);
    parameterBinding.DtsVariableName = "User::Variable";
    parameterBinding.ParameterDirection = ParameterDirections.Input;
    parameterBinding.DataType = (int)OleDBDataTypes.VARCHAR;
    parameterBinding.ParameterName = "0";
    parameterBinding.ParameterSize = 255;

    For the full version of this code, see the CreatePackageParameterVariable method in the sample class. You'll notice the data type has to be specified for the parameter (the IDTSParameterBinding.DataType property), and these type codes are connection specific too. The enumeration shown below, which I wrote several years ago, was probably done by reverse engineering a package and the API header file, but I recently found a very handy post that covers more connection types as well, for exactly this purpose: Setting the DataType of IDTSParameterBinding objects (Execute SQL Task).

    /// <summary>
    /// Enumeration of OLE-DB types, used when mapping OLE-DB parameters.
    /// </summary>
    private enum OleDBDataTypes
    {
        BYTE = 0x11,
        CURRENCY = 6,
        DATE = 7,
        DB_VARNUMERIC = 0x8b,
        DBDATE = 0x85,
        DBTIME = 0x86,
        DBTIMESTAMP = 0x87,
        DECIMAL = 14,
        DOUBLE = 5,
        FILETIME = 0x40,
        FLOAT = 4,
        GUID = 0x48,
        LARGE_INTEGER = 20,
        LONG = 3,
        NULL = 1,
        NUMERIC = 0x83,
        NVARCHAR = 130,
        SHORT = 2,
        SIGNEDCHAR = 0x10,
        ULARGE_INTEGER = 0x15,
        ULONG = 0x13,
        USHORT = 0x12,
        VARCHAR = 0x81,
        VARIANT_BOOL = 11
    }

    Download Sample code ExecSqlPackage.cs (10KB)

    Read the article

  • Documentation utility for OpenEdge ABL

    - by glowcoder
    I have a large system in OpenEdge ABL that could use some documentation love. Currently a team member is working on a utility that can find methods and functions and make some "Javadoc-esque" HTML pages out of them. It's pretty rough around the edges. Okay, it's like sawblades around the edges. I'm trying to find something like Javadoc or Doxygen that is capable of parsing OpenEdge ABL to generate some kind of API documentation. I know the market for OpenEdge isn't the best, but there is a lot of stuff that's passed along by word of mouth. It's difficult to search for because the product used to be called "Progress", which throws off your search queries with non-relevant information. I'm also open to a system that lets you define the regexes that describe your own syntax; it then parses the source and gives you output based on that. Thanks!

    Read the article

  • Automatic Properties, Collection Initializers, and Implicit Line Continuation support with VB 2010

    [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu] This is the eighteenth in a series of blog posts I'm doing on the upcoming VS 2010 and .NET 4 release. A few days ago I blogged about two new language features coming with C# 4.0: optional parameters and named arguments. Today I'm going to post about a few of my favorite new features being added to VB with VS 2010: Auto-Implemented Properties, Collection...
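    The excerpt above is truncated; as quick background (an editorial illustration, not code from the post itself), here is a minimal C# 4.0 sketch of the optional parameters and named arguments features it refers to:

    using System;

    class Mailer
    {
        // C# 4.0 optional parameters: callers may omit 'subject' and 'highPriority'.
        public void Send(string to, string subject = "(no subject)", bool highPriority = false)
        {
            Console.WriteLine("To: {0} | Subject: {1} | HighPriority: {2}", to, subject, highPriority);
        }
    }

    class Demo
    {
        static void Main()
        {
            var mailer = new Mailer();
            mailer.Send("someone@example.com");                     // both optionals take their defaults
            mailer.Send("someone@example.com", highPriority: true); // named argument skips 'subject'
        }
    }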

    Read the article

  • Non-perfect maze generation algorithm

    - by Shylux
    I want to generate a maze with the following properties:
    - The maze is non-perfect, meaning it has loops and multiple ways to reach the exit.
    - The maze should be random; the algorithm should output different mazes for different input parameters.
    - The maze doesn't have to be braided, meaning dead-ends are allowed and appreciated.
    I just can't find the right resources on Google. The closest I found was this description of the different types of algorithms: http://www.astrolog.org/labyrnth/algrithm.htm. All other algorithms were for perfect mazes. Can anyone give me a website where I can look this up, or maybe an algorithm directly?
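    One common recipe for this (an editorial sketch, not from the post; the grid layout and parameter names are assumptions) is to carve a perfect maze with recursive backtracking and then knock out a few extra interior walls so loops appear:

    using System;
    using System.Collections.Generic;

    class NonPerfectMaze
    {
        // Carves a perfect maze by recursive backtracking, then removes extra
        // interior walls at random so the maze gains loops (non-perfect).
        static bool[,] Generate(int width, int height, int extraOpenings, int seed)
        {
            var rng = new Random(seed); // different seeds produce different mazes
            var wall = new bool[width, height];
            for (int x = 0; x < width; x++)
                for (int y = 0; y < height; y++)
                    wall[x, y] = true;

            var stack = new Stack<(int X, int Y)>();
            wall[1, 1] = false;
            stack.Push((1, 1));
            int[] dx = { 2, -2, 0, 0 }, dy = { 0, 0, 2, -2 };

            while (stack.Count > 0)
            {
                var (cx, cy) = stack.Peek();
                var open = new List<int>();
                for (int d = 0; d < 4; d++)
                {
                    int nx = cx + dx[d], ny = cy + dy[d];
                    if (nx > 0 && ny > 0 && nx < width - 1 && ny < height - 1 && wall[nx, ny])
                        open.Add(d);
                }
                if (open.Count == 0) { stack.Pop(); continue; }
                int dir = open[rng.Next(open.Count)];
                wall[cx + dx[dir] / 2, cy + dy[dir] / 2] = false; // open wall between cells
                wall[cx + dx[dir], cy + dy[dir]] = false;         // open the neighbour cell
                stack.Push((cx + dx[dir], cy + dy[dir]));
            }

            // Dead-ends remain, but these random openings create loops and
            // multiple routes to the exit.
            for (int i = 0; i < extraOpenings; i++)
                wall[1 + rng.Next(width - 2), 1 + rng.Next(height - 2)] = false;

            return wall;
        }

        static void Main()
        {
            bool[,] wall = Generate(21, 11, extraOpenings: 8, seed: 42);
            for (int y = 0; y < 11; y++)
            {
                for (int x = 0; x < 21; x++)
                    Console.Write(wall[x, y] ? '#' : ' ');
                Console.WriteLine();
            }
        }
    }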

    Read the article

  • Ask How-To Geek: Clone a Disk, Resize Static Windows, and Create System Function Shortcuts

    - by Jason Fitzpatrick
    This week we take a look at how to clone a hard disk for easy backup or duplication, resize stubbornly static windows, and create shortcuts for dozens of Windows functions. Once a week we dip into our reader mailbag and help readers solve their problems, sharing the useful solutions with you in the process. Read on to see our fixes for this week's reader dilemmas.

    Read the article

  • Your thoughts on Best Practices for Scientific Computing?

    - by John Smith
    A recent paper by Wilson et al. (2014) pointed out 24 best practices for scientific programming. It's worth a look. I would like to hear opinions about these points from experienced programmers in scientific data analysis. Do you think this advice is helpful and practical? Or is it good only in an ideal world?

    Wilson G, Aruliah DA, Brown CT, Chue Hong NP, Davis M, Guy RT, Haddock SHD, Huff KD, Mitchell IM, Plumbley MD, Waugh B, White EP, Wilson P (2014) Best Practices for Scientific Computing. PLoS Biol 12:e1001745. http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1001745

    Box 1. Summary of Best Practices
    1. Write programs for people, not computers. (a) A program should not require its readers to hold more than a handful of facts in memory at once. (b) Make names consistent, distinctive, and meaningful. (c) Make code style and formatting consistent.
    2. Let the computer do the work. (a) Make the computer repeat tasks. (b) Save recent commands in a file for re-use. (c) Use a build tool to automate workflows.
    3. Make incremental changes. (a) Work in small steps with frequent feedback and course correction. (b) Use a version control system. (c) Put everything that has been created manually in version control.
    4. Don't repeat yourself (or others). (a) Every piece of data must have a single authoritative representation in the system. (b) Modularize code rather than copying and pasting. (c) Re-use code instead of rewriting it.
    5. Plan for mistakes. (a) Add assertions to programs to check their operation. (b) Use an off-the-shelf unit testing library. (c) Turn bugs into test cases. (d) Use a symbolic debugger.
    6. Optimize software only after it works correctly. (a) Use a profiler to identify bottlenecks. (b) Write code in the highest-level language possible.
    7. Document design and purpose, not mechanics. (a) Document interfaces and reasons, not implementations. (b) Refactor code in preference to explaining how it works. (c) Embed the documentation for a piece of software in that software.
    8. Collaborate. (a) Use pre-merge code reviews. (b) Use pair programming when bringing someone new up to speed and when tackling particularly tricky problems. (c) Use an issue tracking tool.

    I'm relatively new to serious programming for scientific data analysis. When I tried to write code for pilot analyses of some of my data last year, I encountered a tremendous number of bugs both in my code and in my data. Bugs and errors had been around me all the time, but this time it was somewhat overwhelming. I managed to crunch the numbers at last, but I thought I couldn't put up with this mess any longer. Some action had to be taken. Without a sophisticated guide like the article above, I started to adopt a "defensive style" of programming. A book titled "The Art of Readable Code" helped me a lot. I deployed meticulous input validations or assertions for every function, renamed a lot of variables and functions for better readability, and extracted many subroutines as reusable functions. Recently, I introduced Git and SourceTree for version control. At the moment, because my co-workers are much more reluctant about these issues, the collaboration practices (8a, b, c) have not been introduced. Actually, as the authors admitted, because all of these practices take some amount of time and effort to introduce, it may be generally hard to persuade reluctant collaborators to comply with them. I think I'm asking for your opinions because I still suffer from many bugs despite all my effort on many of these practices. Bug fixing may be, or should be, faster than before, but I can't really measure the improvement. Moreover, much of my time has been invested in defence, meaning that I haven't actually done much data analysis (offence) these days. Where is the point I should stop at in terms of productivity?
    I've already deployed: 1a,b,c, 2a, 3a,b,c, 4b,c, 5a,d, 6a,b, 7a,b
    I'm about to have a go at: 5b,c
    Not yet: 2b,c, 4a, 7c, 8a,b,c
    (I could not really see the advantage of using GNU make (2c) for my purpose. Could anyone tell me how it helps my work with MATLAB?)
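    As an aside on practices 5(a) and 5(c), and not from the paper or the question itself, here is a minimal C# sketch of an input-validation guard plus a bug frozen into a test case (all names are hypothetical):

    using System;

    public static class Stats
    {
        // Practice 5(a): validate inputs so bad data fails loudly at the call
        // site instead of silently corrupting results downstream.
        public static double Mean(double[] values)
        {
            if (values == null || values.Length == 0)
                throw new ArgumentException("Mean requires a non-empty array.");
            double sum = 0;
            foreach (var v in values)
            {
                if (double.IsNaN(v))
                    throw new ArgumentException("Mean received NaN; clean the data first.");
                sum += v;
            }
            return sum / values.Length;
        }
    }

    // Practice 5(c): once a bug is found (e.g. NaN silently propagating),
    // freeze it as a test so it can never come back unnoticed.
    public static class MeanTests
    {
        public static void NaNInputIsRejected()
        {
            try { Stats.Mean(new[] { 1.0, double.NaN }); }
            catch (ArgumentException) { return; } // expected path
            throw new Exception("Test failed: NaN input was accepted.");
        }
    }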

    Read the article

  • Adopting DBVCS

    - by Wes McClure
    Identify early adopters
    Pick a small project with a small(ish) team. This can be a legacy application or a green-field application. Strive to find a team of early adopters that will be eager to try something new. Get the team on board!

    Research
    Research the tool(s) that you want to use. Some tools provide all of the features you would need while some only provide a slice of the pie. DBVCS requires the ability to manage a set of change scripts that update a database from one version to the next. Ideally a tool can track database versions and automatically apply updates. The change script generation process can be manual, but having diff tools available to automatically generate them can really reduce the overhead of adoption. Finally, an automated tool to generate a script file per database object is an added bonus, as your version control system can then quickly identify what was changed in a commit (add/delete/modify), just like with code changes.
    Don't settle on just one tool; identify several. Then work with the team to evaluate the tools. Have the team test the following scenarios with each tool:
    - Baseline an existing database: can the migration tool work with legacy databases? Caution: most migration platforms do not support baselines or have poor support, especially the fad of fluent APIs.
    - Add/drop tables
    - Add/drop procedures/functions/views
    - Alter tables (rename columns, add columns, remove columns)
    - Massage data: migrations sometimes involve changing data types that cannot be implicitly cast and require you to decide how the data is explicitly cast to the new type. This is a requirement for a migrations platform. Think about a case where you might want to combine fields, or move a field from one table to another; you wouldn't want to lose the data.
    - Run the tool via the command line. If you cannot automate the tool in Continuous Integration, what is the point?
    - Create a copy of a database on demand.
    - Backup/restore databases locally.
    Let the team give feedback and decide together what tool they would like to try out. My recommendation at this point would be to include TSqlMigrations and RoundHouse as SQL based migration platforms. In general I would recommend staying away from the fluent platforms, as they often lack baseline capabilities and add the overhead of learning a new API when SQL is already a very well known DSL. Code migrations often get messy with procedures/views/functions, as these have to be created with SQL and aren't cross platform anyway. IMO stick to SQL based migrations; a sketch of what a minimal migration runner does follows below.

    Reconciling Production
    If your project is a legacy application, you will need to reconcile the current state of production with your development databases. Find changes in production and bring them down to development, even if they are old and need to be removed. Once complete, produce a baseline of either dev or prod, as they are now in sync. Commit this to your VCS of choice. Add whatever schema-changes tracking mechanism your tool requires to your development database. This often requires adding a table to track the schema version of that database; your tool should support doing this for you. You can add this table to production when you do your next release. Script out any changes currently in dev. Remove production artifacts that you brought down during reconciliation. Add change scripts for any outstanding changes in dev since the last production release. Commit these to your repository.
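    To make the change-script mechanism concrete, here is a minimal sketch of a hand-rolled runner (a hypothetical illustration, not TSqlMigrations or RoundHouse) that reads a SchemaVersion table and applies numbered .sql files in order; the table name, script layout and connection string are all assumptions:

    using System;
    using System.Data.SqlClient;
    using System.IO;
    using System.Linq;

    class MigrationRunner
    {
        // Assumed layout: scripts named 0001_create_tables.sql, 0002_....sql in a
        // "migrations" folder, single batch each (no GO separators), and a
        // SchemaVersion(Version int) table recording what has been applied.
        static void Main()
        {
            const string connStr = "Server=.;Database=Dev;Integrated Security=true";
            using (var conn = new SqlConnection(connStr))
            {
                conn.Open();
                int current;
                using (var cmd = new SqlCommand("SELECT MAX(Version) FROM SchemaVersion", conn))
                    current = (cmd.ExecuteScalar() as int?) ?? 0;

                foreach (var file in Directory.GetFiles("migrations", "*.sql").OrderBy(f => f))
                {
                    int number = int.Parse(Path.GetFileName(file).Substring(0, 4));
                    if (number <= current) continue; // already applied
                    using (var tx = conn.BeginTransaction()) // roll back on failure
                    {
                        new SqlCommand(File.ReadAllText(file), conn, tx).ExecuteNonQuery();
                        new SqlCommand("INSERT INTO SchemaVersion (Version) VALUES (" + number + ")",
                                       conn, tx).ExecuteNonQuery();
                        tx.Commit();
                    }
                }
            }
        }
    }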
    Say No to Shared Dev DBs
    Simply put, you wouldn't dream of sharing a code checkout, so why would you share a development database? If you have a shared dev database, back it up, distribute the backups, and take the shared version offline (including the dev DB server, once all projects are using DB VCS). Doing DB VCS with a shared database is bound to cause problems, as people won't be able to easily script out their own changes separately from those that others are working on.

    First prod release
    Copy prod to your beta/testing environment. Add the schema changes table (or mechanism) and do a test run of your changes. If successful, you can schedule this to be run on production.

    Evaluation
    After your first release, evaluate the pain points of the process. Try to find tools or modifications to existing tools to help fix them. Don't leave stones unturned; iteratively evolve your tools and practices to make the process as seamless as possible. This is why I suggest open source alternatives. Nothing is set in stone. A good example was adding transactional support to TSqlMigrations: we ran into situations where an update would break a database, so I added a feature to do transactional updates and roll back on errors! Another good example is generating change scripts. We had been manually making these for months. I found an open source project called Open DB Diff and integrated it with TSqlMigrations. These were things we just accepted at the time we began adopting our tool set. Once we became comfortable with the base functionality, it was time to start automating more of the process. Just like anything else in development, never be afraid to look for tools to make your job easier!

    Enjoy -Wes

    Read the article

  • CodePlex Daily Summary for Sunday, March 07, 2010

    CodePlex Daily Summary for Sunday, March 07, 2010

    New Projects
    - Algorithminator: Universal .NET algorithm visualizer, which helps you to illustrate any algorithm, written in any .NET language. Still in development.
    - ALToolkit: Contains a set of handy .NET components/classes. Currently it contains: * A Numeric Text Box (an Extended NumericUpDown) * A Splash Screen base fo...
    - Automaton Home: Automaton is a home automation software built with a n-Tier, MVVM pattern utilizing WCF, EF, WPF, Silverlight and XBAP.
    - Developer Controls: Developer Controls contains various controls to help build applications that can script/write code.
    - Dynamic Reference Manager: Dynamic Reference Manager is a set (more like a small group) of classes and attributes written in C# that allows any .NET program to reference othe...
    - indiologic: Utilities of an Indio
    - Neural Cryptography in F#: This project is my magistracy resulting work. It is intended to be an example of using neural networks in cryptography. Hashing functions are chose...
    - Particle Filter Visualization: Particle Filter Visualization Program for the Intel Science and Engineering Fair
    - Pólya: Efficient, immutable, polymorphic collections. .Net lacks them, we provide them*. * By we, we mean I; and by efficient, I mean hopefully so.
    - project euler solutions from mhinze: mhinze project euler solutions
    - Silverlight 4 and WCF multi layer: Silverlight 4 and WCF multi layer
    - sqwarea: Project for a browser-based, minimalistic, massively multiplayer strategy game. Part of the "Génie logiciel et Cloud Computing" course of the ENS (...
    - SuperSocket: SuperSocket, a socket application framework can build FTP/SMTP/POP server easily
    - Toast (for ASP.NET MVC): Dynamic, developer & designer friendly content injection, compression and optimization for ASP.NET MVC

    New Releases
    - ALToolkit: ALToolkit 1.0: Binary release of the libraries containing: NumericTextBox, SplashScreen. Based on the VB.NET code, but that doesn't really matter.
    - Blacklist of Providers: 1.0-Milestone 1: Blacklist of Providers, Milestone 1. In this development release implemented - Main interface (Work Item #5453) - Database (Work Item #5523)
    - C# Linear Hash Table: Linear Hash Table b2: Now includes a default constructor, and will throw an exception if capacity is not set to a power of 2 or loadToMaintain is below 1.
    - Composure: CassiniDev-Trunk-40745-VS2010.rc1.NET4: A simple port of the CassiniDev portable web server project for Visual Studio 2010 RC1 built against .NET 4.0. The WCF tests currently fail unless...
    - Developer Controls: DevControls: These are the version 1.0 releases of these controls. Download them individually or all together (in a .zip file). More releases coming soon!
    - Dynamic Reference Manager: DRM Alpha1: This is the first release. I'm calling it Alpha because I intend implementing other functions, but I do not intend changing the way current functio...
    - ESB Toolkit Extensions: Tellago SOA ESB Extenstions v0.3: Windows Installer file that installs Library on a BizTalk ESB 2.0 system. This Install automatically configures the esb.config to use the new compo...
    - GKO Libraries: GKO Libraries 0.1 Alpha: 0.1 Alpha
    - Home Access Plus+: v3.0.3.0: Version 3.0.3.0 Release Change Log: Added Announcement Box; Removed script files that aren't needed; Fixed & issue in directory path; Stylesheet...
    - Icarus Scene Engine: Icarus Scene Engine 1.10.306.840: Icarus Professional, Icarus Player, the supporting software for Icarus Scene Engine, with some included samples, and the start of a tutorial (with ...
    - mavjuz WndLpt: wndlpt-0.2.5: New: Response to 5 LPT inputs "test i 1"; New: Reaction to 12 LPT outputs "test q 8"; New: Reaction to all LPT pins "test pin 15"; New: Syntax: ...
    - Neural Cryptography in F#: Neural Cryptography 0.0.1: The most simple version of this project. It has a neural network that works just like logical AND and a possibility to recreate neural network from...
    - Password Provider: 1.0.3: This release fixes a bug which caused the program to crash when double clicking on a generic item.
    - RoTwee: RoTwee 6.2.0.0: New feature is as next: 16649 Add hashtag for tweet of tune. Now you can tweet your playing tune with hashtag.
    - Visual Studio DSite: Picture Viewer (Visual C++ 2008): This example source code allows you to view any picture you want in the click of a button. All you got to do is click the button and browse via th...
    - WatchersNET CKEditor™ Provider for DotNetNuke: CKEditor Provider 1.8.00: What's New: File Browser: Folders & Files View reworked; File Browser: Folders are displayed as TreeVi...
    - WSDLGenerator: WSDLGenerator 0.0.0.4: replaced CommonLibrary.dll by CommandLineParser.dll; added better support for custom complex types

    Most Popular Projects: MetaSharp; Silverlight Toolkit; ASP.NET Ajax Library; All-In-One Code Framework; Windows 7 USB/DVD Download Tool; ニコ生アラート; Windows Double Explorer; Virtual Router - Wifi Hot Spot for Windows 7 / 2008 R2; Caliburn: An Application Framework for WPF and Silverlight; ArkSwitch

    Most Active Projects: Umbraco CMS; Rawr; SDS: Scientific DataSet library and tools; BlogEngine.NET; jQuery Library for SharePoint Web Services; patterns & practices – Enterprise Library; Ionics Isapi Rewrite Filter; Farseer Physics Engine; Fasterflect - A Fast and Simple Reflection API; Fluent Assertions

    Read the article

  • How to copy items using Nintex Workflow

    - by ybbest
    Nintex does not offer copying items from one SharePoint library to another out of the box. However, it is not hard to implement it yourself: you can use the copy.asmx web service to achieve this. Here are the steps (you can download the source here; a sketch of the underlying service call follows below):
    1. Create a UDA with the following parameters:
    2. Call the copy.asmx service to copy the item from SourceItemUrl to DestinationItemUrl.
    3. If your destination document library has versioning and check-in/out turned on, you can use lists.asmx to check in your file, as below:
    4. You need to create a constant of Credential type named SP_WORKFLOW_WS, as below:
    5. Here is how it looks in the Workflow designer.
    6. To call this UDA, you can perform the following in your workflow:
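    For readers who want to see what the copy.asmx call looks like outside Nintex, here is a minimal C# sketch of the Copy web service's GetItem/CopyIntoItems pair; the "CopyService" proxy name comes from adding a web reference to /_vti_bin/copy.asmx, and the URLs are placeholders:

    using System;
    using System.Net;

    class CopyItemDemo
    {
        static void Main()
        {
            // "CopyService" is the name given to the generated web-reference proxy.
            var copySvc = new CopyService.Copy();
            copySvc.Url = "http://server/sites/source/_vti_bin/copy.asmx";
            copySvc.Credentials = CredentialCache.DefaultCredentials;

            string sourceUrl = "http://server/sites/source/Shared Documents/report.docx";
            string[] destUrls = { "http://server/sites/dest/Shared Documents/report.docx" };

            // Read the source file and its metadata through the service...
            CopyService.FieldInformation[] fields;
            byte[] bytes;
            copySvc.GetItem(sourceUrl, out fields, out bytes);

            // ...then write it into the destination library.
            CopyService.CopyResult[] results;
            copySvc.CopyIntoItems(sourceUrl, destUrls, fields, bytes, out results);
            Console.WriteLine(results[0].ErrorCode);
        }
    }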

    Read the article

  • DataTables warning (table id = 'example-advanced'): Cannot reinitialise DataTable while using treetable and datatable at the same time

    - by Nyaro
    DataTables warning (table id = 'example-advanced'): Cannot reinitialise DataTable. This happens while using treetable and DataTables at the same time. Here is my code:

    <script src="jquery-1.7.2.min.js"></script>
    <script src='jquery.dataTables.min.js'></script>
    <script src="jquery.treetable.js"></script>
    <script>
    $("#example-advanced").treetable({ expandable: true });
    </script>
    <script>
    $('#example-advanced').dataTable( { "bSort": false } );
    </script>

    Actually I wanted to get rid of the sorting part of the DataTable, because it was giving an error in the treetable display. I want the sorting part of the DataTable out while keeping the other functions like search and pagination. Please help me out. Thanks in advance.

    Read the article

  • FluentPath: a fluent wrapper around System.IO

    .NET is now more than eight years old, and some of its APIs have aged with more grace than others. System.IO in particular has always been a little awkward: it's mostly static method calls (Path.*, Directory.*, etc.) and some stateful classes (DirectoryInfo, FileInfo), and in these APIs paths are plain strings. Since .NET v1, lots of good things have happened to C#: lambda expressions, extension methods, and optional parameters, to name just a few. Outside of .NET, other interesting things happened as well. For...
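    To make the contrast concrete, here is a small sketch (the fluent wrapper is hypothetical, built for illustration; it is not FluentPath's actual API) comparing classic static System.IO calls with a fluent, lambda-friendly style:

    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;

    // A toy fluent wrapper over System.IO, built from extension methods.
    static class FluentIo
    {
        public static IEnumerable<FileInfo> Files(this string dir, string pattern)
        {
            return new DirectoryInfo(dir).EnumerateFiles(pattern);
        }

        public static void DeleteAll(this IEnumerable<FileInfo> files)
        {
            foreach (var f in files) f.Delete();
        }
    }

    static class Demo
    {
        static void Main()
        {
            // Classic System.IO: static calls, paths as plain strings.
            foreach (string file in Directory.GetFiles(@"C:\logs", "*.txt"))
                if (new FileInfo(file).Length == 0)
                    File.Delete(file);

            // Fluent style: the path flows left to right through lambdas.
            @"C:\logs".Files("*.txt").Where(f => f.Length == 0).DeleteAll();
        }
    }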

    Read the article

  • How to track subdomains with Google Analytics while having mod_rewrite redirect to a subdomain?

    - by Marek
    When users come directly to domain.com or www.domain.com, I am redirecting them to shop.domain.com via this .htaccess rewrite:

    RewriteEngine on
    RewriteCond %{HTTP_HOST} ^www.domain.com$ [OR]
    RewriteCond %{HTTP_HOST} ^domain.com$
    RewriteRule ^(.*)$ http://shop.domain.com/ [R=301,L]

    The content served by shop.domain.com has the following tracking code parameters:

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-123456-6']);
    _gaq.push(['_setDomainName', '.domain.com']);
    _gaq.push(['_trackPageview']);

    All direct visits that come to shop.domain.com as a result of the rewrite from domain.com are tracked as referral traffic, showing my own domain.com as the referral source in Google Analytics. I would like to track these visits as direct traffic. How can I change the configuration to track mod_rewritten traffic on my subdomain coming from my own domain as direct traffic?

    Read the article

  • Please recommend citations for source code documentation standards

    - by Aerik
    I'm trying to convince another group in my company that they need to provide more documentation in their source code (they want to hand off the code to my group) but they're treating it as a "nice to have". In my view, it's a necessity. I've run a source code analysis tool and it's showing about 10% comment lines - but looking at the source code, most of that is coming from entire functions that the author has commented out. Can anyone provide some authoritative citations / references for documentation / comment standards for source code? (In case it matters, we're a C# house, with a little Matlab thrown in).
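    Since the team is a C# house, one concrete, citable standard is the compiler's built-in XML documentation comment format (enabled with the /doc compiler option and consumed by IntelliSense and documentation generators); a minimal example with hypothetical method and parameter names:

    using System;

    public class Sensor
    {
        /// <summary>
        /// Converts a raw sensor reading to calibrated engineering units.
        /// </summary>
        /// <param name="rawValue">The uncalibrated ADC count.</param>
        /// <param name="gain">Calibration gain from the device datasheet.</param>
        /// <returns>The calibrated value in volts.</returns>
        /// <exception cref="ArgumentOutOfRangeException">
        /// Thrown when <paramref name="gain"/> is zero or negative.
        /// </exception>
        public double Calibrate(int rawValue, double gain)
        {
            if (gain <= 0)
                throw new ArgumentOutOfRangeException("gain");
            return rawValue * gain;
        }
    }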

    Read the article

  • What features would you like to have in PHP?

    - by StasM
    Since it's the holiday season now and everybody's making wishes, I wonder: which language features would you wish PHP had added? I am interested in some practical suggestions/wishes for the language. By practical I mean:
    - Something that can be practically done (not: "I wish PHP would guess what my code means and fix bugs for me" or "I wish any code would execute under 5ms")
    - Something that doesn't require changing PHP into another language (not: "I wish they'd drop $ signs and use space instead of braces" or "I wish PHP were compiled, statically typed and had # in its name")
    - Something that would not require breaking all the existing code (not: "Let's rename 500 functions and change the parameter order for them")
    - Something that does change the language or some interesting aspect of it (not: "I wish there was an extension to support the XYZ protocol" or "I wish bug #12345 were finally fixed")
    - Something that is more than a rant (not: "I wish PHP wouldn't suck so badly")
    Does anybody have any good wishes? Mod edit: Stanislav Malyshev is a core PHP developer.

    Read the article

  • Good ol fashioned debugging

    - by Tim Dexter
    I have been helping out one of our new customers over the last day or two and I have even managed to get to the bottom of their problem, FTW! They use BIEE and BIP and wanted to mount a BIP report in a dashboard page. So far so good, BIP does that! Just follow the instructions in the BIEE user guide. The wrinkle is that they want to enter some fixed instruction strings into the dashboard prompts to help the user. These are added as fixed values to the prompt as the default values so they appear first. Once the user makes a selection, the default strings disappear. It's a fair requirement, but the BIP report chokes. Now, the BIP report had been set up with the Autorun checkbox unchecked. I expected the BIP report to wait for the Go button to be hit, but it was trying to run immediately and failing. That was the first issue: you cannot stop the BIP report from trying to run in a dashboard. Even if Autorun is turned off, it seems that the dashboard still makes the request to BIP to run the report. Rather than BIP refusing because it's waiting for input, it goes ahead anyway; I guess the mechanism does not check the autorun flag when the request is coming from the dashboard. It appears that between BIEE and BIP, they collectively ignore the autorun flag. A bug? Might be; at least an enhancement request. With that in mind, how could we get BIP to at least not fail? This fact was stumping me on the parameter error: if the autorun flag was being respected, then why was BIP complaining about the parameter values? It should not even be doing anything until the Go button is clicked. I now knew that the autorun flag was being ignored, so it was a simple case of putting BIP into debug mode. I use the OC4J server on my laptop, so debug messages are routed through the DOS box used to start the OC4J container. When I changed a value on the dashboard prompt, I spotted some debug text rushing by that subsequently disappeared from the log once the operation was complete. Another bug? I needed to catch that text as it went by, using the print screen function with some software to grab multiple screens as the log appeared and then disappeared. The upshot is that when you change the dashboard prompt value, BIP validates the value against its own LOVs; if it's not in the list, then it throws the error. Because 'Fill this first' and 'Fill this second', i.e. the fixed strings from the dashboard prompts, are not in the LOV lists, and because the report is auto running as soon as the dashboard page is brought up, the report complains about invalid parameters. To get around this, I needed to get the strings into the LOVs. Easily done with a UNION clause:

    select 'Fill this first' from SH.Products Products
    UNION
    select Products."Prod Category" as "Prod Category" from SH.Products Products

    Now when BIP wants to validate the prompt value, the LOV query fires and finds the fixed string -> No Error. No data, but definitely no errors :0) If users do run with the fixed values, you can capture that in the template. If there is no data in the report, either the fixed values were used or the parameters selected resulted in no rows. You can capture this in the template and display something like: 'Either your parameter values resulted in no data or you have not changed the default values'. That's the upside; the downside is that if your users run the report in the BIP UI, they're going to see the fixed strings.
    You could alleviate that by having BIP display the fixed strings at the top of its parameter drop boxes (just set them as the default value for the parameter). But they will not disappear as they do in the dashboard prompts, see below. If the expected autorun behaviour worked, i.e. wait for the Go button, then we would not have to work around it, but for now it's a pretty good solution. It was an enjoyable hour or so for me; it took me back to my developer days, when we used to race each other for the most number of bug fixes. I used to run a distant 2nd behind 'Bugmeister Chen Hu' but led the chasing pack by a reasonable distance.

    Read the article

  • Posting from ASP.NET WebForms page to another URL

    - by hajan
    A few days ago I had a case where I needed to make a FORM POST from my ASP.NET WebForms page to an external site URL. More specifically, I was working on implementing a simple payment system (like Amazon, PayPal, MoneyBookers). The operator asks you to make a FORM POST request to a given URL on their website, sending parameters together with the post which are computed at my application level (access keys, secret keys, signature, return-URL, etc.). So, since we are not allowed to nest another form inside the <form runat="server"> ... </form>, which is required because other controls in my ASPX code work on the server side, I thought to inject the HTML and create a FORM with method="POST". After making a proof of concept and testing some scenarios, I've concluded that I can do this very fast in two ways:
    1. Using jQuery to create a form on the fly with the needed parameters and call submit()
    2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post
    Both ways seemed fine.

    1. Using jQuery to create the FORM html code and submit it.
    Let's say we have a 'PAY NOW' button in our ASPX code:

    <asp:Button ID="btnPayNow" runat="server" Text="Pay Now" />

    Now, if we want this button to submit a FORM using the POST method to another website, the jQuery way would be as follows:

    <script src="http://ajax.aspnetcdn.com/ajax/jquery/jquery-1.5.1.js" type="text/javascript"></script>
    <script type="text/javascript">
        $(function () {
            $("#btnPayNow").click(function (event) {
                event.preventDefault();
                // construct the htmlForm string
                var htmlForm = "<form id='myform' method='POST' action='http://www.microsoft.com'>" +
                    "<input type='hidden' id='name' value='hajan' />" +
                    "</form>";
                // Submit the form
                $(htmlForm).appendTo("body").submit();
            });
        });
    </script>

    Yes, as you see, the code fires on the btnPayNow click. It removes the default button behavior, then creates the htmlForm string. After that, using jQuery, we append the form to the body and submit it. Inside the form, you can see I have set the http://www.microsoft.com URL, so after clicking the button you should be automatically redirected to the Microsoft website (just for testing; for the payment, of course, I'm using the operator's URL).

    2. Using HttpContext.Current.Response.Write to write the form on the server side (code-behind) and embed JavaScript code that will do the post
    The C# code-behind should be something like this:

    public void btnPayNow_Click(object sender, EventArgs e)
    {
        string Url = "http://www.microsoft.com";
        string formId = "myForm1";
        StringBuilder htmlForm = new StringBuilder();
        htmlForm.AppendLine("<html>");
        htmlForm.AppendLine(String.Format("<body onload='document.forms[\"{0}\"].submit()'>", formId));
        htmlForm.AppendLine(String.Format("<form id='{0}' method='POST' action='{1}'>", formId, Url));
        htmlForm.AppendLine("<input type='hidden' id='name' value='hajan' />");
        htmlForm.AppendLine("</form>");
        htmlForm.AppendLine("</body>");
        htmlForm.AppendLine("</html>");
        HttpContext.Current.Response.Clear();
        HttpContext.Current.Response.Write(htmlForm.ToString());
        HttpContext.Current.Response.End();
    }

    So, with this code we create the htmlForm string using the StringBuilder class and then just write the HTML to the page using HttpContext.Current.Response.Write. The interesting part here is that we submit the form using JavaScript code:

    document.forms["myForm1"].submit()

    This code runs on the body load event, which means once the body is loaded the form is automatically submitted.

    Note: In order to test both solutions, create two applications on your web server and post the form from the first to the second website, then get the values in the second website using Request.Form["input-field-id"].

    I hope this was a useful post for you. Regards, Hajan

    Read the article

  • Can’t use Sub Reports and Shared Datasets in BIDS - Data Retrieval failed for the subreport '###', located at ### Please check the log files

    - by simonsabin
    SQL Server 2008 R2 brings us a great new feature: shared datasets. This allows datasets used by parameters and multiple reports to be defined in one place and used in multiple reports. I've been developing a report for SQLBits that required the use of sub reports, which all use a shared dataset for the agenda items. Initially I was developing in Report Builder but have now gone back to BIDS. In doing this, however, all the reports broke, reporting the following error: Data Retrieval failed for the subreport...(read more)

    Read the article

  • RIDC Accelerator for Portal

    - by Stefan Krantz
    What is RIDC?
    Remote IntraDoc Client (RIDC) is a Java API that leverages simple transport protocols like Socket, HTTP and JAX-WS to execute content service operations in WebCenter Content Server. By design, each operation in the Content Server executes statelessly and returns a complete result for the request. Each request object simply specifies, in a Map format (key and value pairs), what service to call and what parameter settings to apply. The result is returned in the same Map format (key and value pairs). The possibilities with RIDC are endless, since you can consume any available service (even custom-made ones), and RIDC can be executed from any Java SE application that needs WebCenter Content Services.

    WebCenter Portal and the example Accelerator RIDC adapter framework
    WebCenter Portal currently integrates and leverages WebCenter Content Services to enable the available use cases in the portal today, like Content Presenter and Doc Lib. However, the current use cases only cover a few of the scenarios that the Content Server has to offer; in addition, it is not rare for customer requirements to involve additional steps and functionality that are provided by WebCenter Content but are not part of the use cases in WebCenter Portal. The good news is RIDC; the second piece of good news is that WebCenter Portal already leverages RIDC and has a connection management framework in place. The million dollar question is: how can I leverage this infrastructure for my custom use cases? Oracle A-Team has, during its interactions, produced an accelerator adapter framework that reuses and leverages the existing connections provisioned in the WebCenter Portal application (it works for WebCenter Spaces as well), together with a very comprehensive design pattern that minimizes the work involved in exposing functionality. Let me introduce the RIDCCommon framework for accelerating WebCenter Content consumption from WebCenter Portal, including Spaces.

    How do I get started?
    Through a few easy steps you will be on your way:
    1. Extract the zip file RIDCCommon.zip to the WebCenter Portal Application file structure (PortalApp)
    2. Open your Portal Application in JDeveloper (PS4/PS5) and select to open the project in your application - this will add the project as a member of the application
    3. Update the Portal project dependencies to include the new RIDCCommon project
    4. Make sure that your WebCenter Content Server connection is marked as primary (a checkbox at the top of the connection properties form)
    You should by this stage have a similar structure in your JDeveloper application:
    - Portal Project
    - PortalWebAssets Project
    - RIDCCommon
    Since the API comes with some example operations that have already been exposed as DataControl actions, if you open the Data Controls accordion you should see the following:

    How do I implement my own operation?
    1. Create a new Java class in, for example, com.oracle.ateam.portal.ridc.operation and call it GetDocInfoOperation
    2. Extend the abstract class com.oracle.ateam.portal.ridc.operation.RIDCAbstractOperation and implement the interface com.oracle.ateam.portal.ridc.operation.IRIDCOperation
    3. The only method you are actually required to implement is execute(RIDCManager, IdcClient, IdcContext)
    4. The best practice for setting object references on the operation is through the constructor, for example: public GetDocInfoOperation(String dDocName). By leveraging the constructor you can easily force the implementing class to pass the right information; you can also overload the constructor with more or fewer parameters as required
    5. Implement the execute method. The work you are supposed to do here is create a new request binder and retrieve a response binder with the information in the request binder - in this case, the dDocName for which we want the DocInfo
    6. Then process the response binder by extracting the information you need from the response and storing it in a simple POJO Java bean. In the example below we do this in private void processResult(DataBinder responseData) - the new SearchDataObject is a member of GetDocInfoOperation, so we can return it from an access method
    7. Since the RIDCCommon API leverages the template pattern for its operations, you are now required to add a method that gives access to the result after the execution of the operation. In the example below we added the method public SearchDataObject getDataObject(), which returns the pre-processed SearchDataObject from the execute method
    This is it. As you can see in the code below, you do not need more than 32 lines of very simple code:

    public class GetDocInfoOperation extends RIDCAbstractOperation implements IRIDCOperation {
        private static final String DOC_INFO_BY_NAME = "DOC_INFO_BY_NAME";
        private String dDocName = null;
        private SearchDataObject sdo = null;

        public GetDocInfoOperation(String dDocName) {
            super();
            this.dDocName = dDocName;
        }

        public boolean execute(RIDCManager manager, IdcClient client,
                               IdcContext userContext) throws Exception {
            DataBinder dataBinder = createNewRequestBinder(DOC_INFO_BY_NAME);
            dataBinder.putLocal(DocumentAttributeDef.NAME.getName(), dDocName);

            DataBinder responseData = getResponseBinder(dataBinder);
            processResult(responseData);
            return true;
        }

        private void processResult(DataBinder responseData) {
            DataResultSet rs = responseData.getResultSet("DOC_INFO");
            for (DataObject dobj : rs.getRows()) {
                this.sdo = new SearchDataObject(dobj);
            }
            super.setMessage(responseData.getLocal(ATTR_MESSAGE));
        }

        public SearchDataObject getDataObject() {
            return this.sdo;
        }
    }

    How do I execute my operation?
    In the previous section we described how to create an operation, so by now you should be ready to execute it:
    1. Add a method to the class com.oracle.ateam.portal.datacontrol.ContentServicesDC or a class of your own choice. Remember the RIDCManager is a very light object and can be created where needed
    2. Create a method signature that looks like this: public SearchDataObject getDocInfo(String dDocName) throws Exception
    3. In the method body, create a new instance of GetDocInfoOperation and meet the constructor requirements by passing the dDocName: GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName)
    4. Execute the operation via the RIDCManager instance: rMgr.executeOperation(docInfo)
    5. Return the result by accessing it from the executed operation: return docInfo.getDataObject()

    private RIDCManager rMgr = null;
    private String lastOperationMessage = null;

    public ContentServicesDC() {
        super();
        this.rMgr = new RIDCManager();
    }
    ....
    public SearchDataObject getDocInfo(String dDocName) throws Exception {
        GetDocInfoOperation docInfo = new GetDocInfoOperation(dDocName);
        boolean boolVal = rMgr.executeOperation(docInfo);
        lastOperationMessage = docInfo.getMessage();
        return docInfo.getDataObject();
    }

    Get the binaries!
    The enclosed code is an example that can be used as a reference on how to consume and leverage similar use cases; users must ensure appropriate quality and support themselves.
    Download link: https://blogs.oracle.com/ATEAM_WEBCENTER/resource/stefan.krantz/RIDCCommon.zip
    RIDC API Reference: http://docs.oracle.com/cd/E23943_01/apirefs.1111/e17274/toc.htm

    Read the article

  • Alert visualization recipe: Get out your blender, drop in some sp_send_dbmail, Google Charts API, add your favorite colors and sprinkle with html. Blend till it’s smooth and looks pretty enough to taste.

    - by Maria Zakourdaev
    I really like database monitoring. My email inbox has a constant flow of different types of alerts coming from our production servers with all kinds of information, sometimes more useful and sometimes less useful. Usually database alerts look really simple: it's usually a plain text email saying "Prod1 Database data file on Server X is 80% used. You'd better grow it manually before some query triggers the AutoGrowth process".

    Imagine you could have received an email like the one below. In addition to the alert description, it could have also included the database file growth chart over the past 6 months. Wouldn't it give you much more information about whether the data growth is natural or extreme? That's truly what data visualization is for. Believe it or not, I have sent the graph below from a SQL Server stored procedure without buying any additional data monitoring/visualization tool.

    Would you like to visualize your database alerts like I do? Then, like myself, you'd love the Google Charts. All you need to know is a little HTML and have a mail profile configured on your SQL Server instance, regardless of the SQL Server version. First of all, I hope you know that the sp_send_dbmail procedure has a great parameter, @body_format = 'HTML', which allows us to send rich and colorful messages instead of boring black and white ones. All that we need is to dynamically create HTML code. This is how, for instance, you can create a table and populate it with some data:

    DECLARE @html varchar(max)
    SET @html = '<html>'
      + '<H3><font id="Text" style="color: Green;">Top Databases: </H3>'
      + '<table border="1" bordercolor="#3300FF" style="background-color:#DDF8CC" width="70%" cellpadding="3" cellspacing="3">'
      + '<tr><font color="Green"><th>Database Name</th><th>Size</th><th>Physical Name</th></tr>'
      + CAST( (SELECT TOP 10
                   td = name, '',
                   td = size * 8/1024, '',
                   td = physical_name
               FROM sys.master_files
               ORDER BY size DESC
               FOR XML PATH ('tr'), TYPE ) AS VARCHAR(MAX))
      + '</table>'

    EXEC msdb.dbo.sp_send_dbmail
        @recipients = '[email protected]',
        @subject = 'Top databases',
        @body = @html,
        @body_format = 'HTML'

    This is the result:

    If you want to add more visualization effects, you can use the Google Charts Tools (https://google-developers.appspot.com/chart/interactive/docs/index), which is a free and rich library of data visualization charts; they're also easy to populate and embed. There are two versions of the Google Charts:
    1. Image based charts (https://google-developers.appspot.com/chart/image/docs/gallery/chart_gall). This is an old version; it's officially deprecated, although it will be up for the next few years or so. I really enjoy using this one because it can be viewed within the email body. For mobile devices you need to change the "Load remote images" property in your email application configuration.
    2. Charts based on JavaScript classes (https://google-developers.appspot.com/chart/interactive/docs/gallery). This API is newer, with rich and highly interactive charts, and it's much easier to understand and configure. The only downside is that the charts cannot be viewed within the email body: Outlook, Gmail and many other email clients, as part of their security policy, do not run any JavaScript that's placed within the email body. However, you can still enjoy this API by sending the report as an email attachment.

    Here is an example of the old version of the Google Charts API, sending the same top databases report as in the previous example, but instead of a simple table this script uses a pie chart right from the T-SQL code:

    DECLARE @html varchar(8000)
    DECLARE @Series varchar(800), @Labels varchar(8000), @Legend varchar(8000);

    SET @Series = '';
    SET @Labels = '';
    SET @Legend = '';

    SELECT TOP 5 @Series = @Series + CAST(size * 8/1024 as varchar) + ',',
                 @Labels = @Labels + CAST(size * 8/1024 as varchar) + 'MB' + '|',
                 @Legend = @Legend + name + '|'
    FROM sys.master_files
    ORDER BY size DESC

    SELECT @Series = SUBSTRING(@Series,1,LEN(@Series)-1),
           @Labels = SUBSTRING(@Labels,1,LEN(@Labels)-1),
           @Legend = SUBSTRING(@Legend,1,LEN(@Legend)-1)

    SET @html =
      '<H3><font color="Green"> '+@@ServerName+' top 5 databases : </H3>'+
      '<br>'+
      '<img src="http://chart.apis.google.com/chart?'+
      'chf=bg,s,DDF8CC&'+
      'cht=p&'+
      'chs=400x200&'+
      'chco=3072F3|7777CC|FF9900|FF0000|4A8C26&'+
      'chd=t:'+@Series+'&'+
      'chl='+@Labels+'&'+
      'chma=0,0,0,0&'+
      'chdl='+@Legend+'&'+
      'chdlp=b"'+
      ' alt="'+@@ServerName+' top 5 databases" />'

    EXEC msdb.dbo.sp_send_dbmail @recipients = '[email protected]',
                                 @subject = 'Top databases',
                                 @body = @html,
                                 @body_format = 'HTML'

    This is what you get. Isn't it great?

    Chart parameters reference:
    chf: gradient fill (bg - background; s - solid)
    cht: chart type (p - pie)
    chs: chart size (width x height)
    chco: series colors
    chd: chart data string (1,2,3,2)
    chl: pie chart labels (a|b|c|d)
    chma: chart margins
    chdl: chart legend (a|b|c|d)
    chdlp: chart legend text (b - bottom of chart)

    Line graph implementation is also really easy and powerful:

    DECLARE @html varchar(max)
    DECLARE @Series varchar(max)
    DECLARE @HourList varchar(max)

    SET @Series = '';
    SET @HourList = '';

    SELECT @HourList = @HourList + SUBSTRING(CONVERT(varchar(13),last_execution_time,121), 12,2) + '|',
           @Series = @Series + CAST( COUNT(1) as varchar) + ','
    FROM sys.dm_exec_query_stats s
        CROSS APPLY sys.dm_exec_sql_text(plan_handle) t
    WHERE last_execution_time >= getdate()-1
    GROUP BY CONVERT(varchar(13),last_execution_time,121)
    ORDER BY CONVERT(varchar(13),last_execution_time,121)

    SET @Series = SUBSTRING(@Series,1,LEN(@Series)-1)

    SET @html =
    '<img src="http://chart.apis.google.com/chart?'+
    'chco=CA3D05,87CEEB&'+
    'chd=t:'+@Series+'&'+
    'chds=1,350&'+
    'chdl= Proc executions from cache&'+
    'chf=bg,s,1F1D1D|c,lg,0,363433,1.0,2E2B2A,0.0&'+
    'chg=25.0,25.0,3,2&'+
    'chls=3|3&'+
    'chm=d,CA3D05,0,-1,12,0|d,FFFFFF,0,-1,8,0|d,87CEEB,1,-1,12,0|d,FFFFFF,1,-1,8,0&'+
    'chs=600x450&'+
    'cht=lc&'+
    'chts=FFFFFF,14&'+
    'chtt=Executions for from' +(SELECT CONVERT(varchar(16),min(last_execution_time),121)
             FROM sys.dm_exec_query_stats
             WHERE last_execution_time >= getdate()-1) +' till '+
    +(SELECT CONVERT(varchar(16),max(last_execution_time),121)
        FROM sys.dm_exec_query_stats) + '&'+
    'chxp=1,50.0|4,50.0&'+
    'chxs=0,FFFFFF,12,0|1,FFFFFF,12,0|2,FFFFFF,12,0|3,FFFFFF,12,0|4,FFFFFF,14,0&'+
    'chxt=y,y,x,x,x&'+
    'chxl=0:|1|350|1:|N|2:|'+@HourList+'3:|Hour&'+
    'chma=55,120,0,0" alt="" />'

    EXEC msdb.dbo.sp_send_dbmail @recipients = '[email protected]',
        @subject = 'Daily number of executions',
        @body = @html,
        @body_format = 'HTML'

    Chart parameters reference:
    chco: series colors
    chd: series data
    chds: scale format
    chdl: chart legend
    chf: background fills
    chg: grid line
    chls: line style
    chm: line fill
    chs: chart size
    cht: chart type
    chts: chart style
    chtt: chart title
    chxp: axis label positions
    chxs: axis label styles
    chxt: axis tick mark styles
    chxl: axis labels
    chma: chart margins

    If you don't mind getting your charts as an email attachment, you can enjoy the JavaScript-based Google Charts, which are even easier to configure and have much more advanced graphics. In the example below, the sp_send_dbmail procedure uses the parameter @query, which will be executed at the time sp_send_dbmail runs, and the HTML result of this execution will be attached to the email.

    DECLARE @html varchar(max), @query varchar(max)
    DECLARE @SeriesDBusers varchar(800);
    SET @SeriesDBusers = '';

    SELECT @SeriesDBusers = @SeriesDBusers + ' ["'+DB_NAME(r.database_id) +'", ' +cast(count(1) as varchar)+'],'
    FROM sys.dm_exec_requests r
    GROUP BY DB_NAME(database_id)
    ORDER BY count(1) desc;

    SET @SeriesDBusers = SUBSTRING(@SeriesDBusers,1,LEN(@SeriesDBusers)-1)

    SET @query = '
    PRINT ''
    <html>
      <head>
        <script type="text/javascript" src="https://www.google.com/jsapi"></script>
        <script type="text/javascript">
          google.load("visualization", "1", {packages:["corechart"]});
          google.setOnLoadCallback(drawChart);
          function drawChart() {
            var data = google.visualization.arrayToDataTable([
              ["Database Name", "Active users"],
              '+@SeriesDBusers+'
            ]);
            var options = {
              title: "Active users",
              pieSliceText: "value"
            };
            var chart = new google.visualization.PieChart(document.getElementById("chart_div"));
            chart.draw(data, options);
          };
        </script>
      </head>
      <body>
        <table>
        <tr><td>
            <div id="chart_div" style="width: 800px; height: 300px;"></div>
        </td></tr>
        </table>
      </body>
    </html>
    '''

    EXEC msdb.dbo.sp_send_dbmail
       @recipients = '[email protected]',
       @subject = 'Active users',
       @body = @html,
       @body_format = 'HTML',
       @query = @Query,
       @attach_query_result_as_file = 1,
       @query_attachment_filename = 'Results.htm'

    After opening the email attachment in the browser you get this kind of report:

    In fact, the above is not only for database alerts. It can be used for applicative reports if you need a level of customization that you cannot achieve using standard methods like SSRS. If you need more information on how to customize the charts, you can try the following:
    Image Based Charts wizard: https://google-developers.appspot.com/chart/image/docs/chart_wizard
    Live Image Charts Playground: https://google-developers.appspot.com/chart/image/docs/chart_playground
    Image Based Charts Parameters List: https://google-developers.appspot.com/chart/image/docs/chart_params
    JavaScript Charts Playground: https://code.google.com/apis/ajax/playground/?type=visualization

    Use the above examples as a starting point for your procedures, and I'd be more than happy to hear of your implementations of the above techniques. Yours, Maria

    Read the article

  • Alien deletes .deb when converting from .rpm

    - by Andre
    I'm trying to convert .rpm to .deb using alien:

        sudo alien -k libtetra-1.0.0-2.i386.rpm

    Alien says that:

        libtetra-1.0.0-2.i386.deb generated

    But when I check the folder, there is just the original .rpm and no .deb. Also, I can see that for a split second there is a .deb file in the folder, so it looks like alien creates the .deb and deletes it right away. I suspect it may be because I run a 64-bit OS and the package is 32-bit? Can somebody explain why alien deletes the .deb automatically?

    Verbose output:

        LANG=C rpm -qp --queryformat %{NAME} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{VERSION} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{RELEASE} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{ARCH} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{CHANGELOGTEXT} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{SUMMARY} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{DESCRIPTION} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{PREFIXES} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{POSTIN} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{POSTUN} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{PREUN} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{LICENSE} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qp --queryformat %{PREIN} libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qcp libtetra-1.0.0-2.i386.rpm
        rpm -qpi libtetra-1.0.0-2.i386.rpm
        LANG=C rpm -qpl libtetra-1.0.0-2.i386.rpm
        mkdir libtetra-1.0.0
        chmod 755 libtetra-1.0.0
        rpm2cpio libtetra-1.0.0-2.i386.rpm | lzma -t -q > /dev/null 2>&1
        rpm2cpio libtetra-1.0.0-2.i386.rpm | (cd libtetra-1.0.0; cpio --extract --make-directories --no-absolute-filenames --preserve-modification-time) 2>&1
        chmod 755 libtetra-1.0.0/./
        chmod 755 libtetra-1.0.0/./usr
        chmod 755 libtetra-1.0.0/./usr/lib
        chown 0:0 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
        chmod 755 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
        mkdir libtetra-1.0.0/debian
        date -R
        date -R
        chmod 755 libtetra-1.0.0/debian/rules
        debian/rules binary 2>&1
        libtetra_1.0.0-3_i386.deb generated
        find libtetra-1.0.0 -type d -exec chmod 755 {} ;
        rm -rf libtetra-1.0.0

    Very verbose output:

        LANG=C rpm -qp --queryformat %{NAME} libtetra-1.0.0-2.i386.rpm
        libtetra
        LANG=C rpm -qp --queryformat %{VERSION} libtetra-1.0.0-2.i386.rpm
        1.0.0
        LANG=C rpm -qp --queryformat %{RELEASE} libtetra-1.0.0-2.i386.rpm
        2
        LANG=C rpm -qp --queryformat %{ARCH} libtetra-1.0.0-2.i386.rpm
        i386
        LANG=C rpm -qp --queryformat %{CHANGELOGTEXT} libtetra-1.0.0-2.i386.rpm
        - First RPM Package
        LANG=C rpm -qp --queryformat %{SUMMARY} libtetra-1.0.0-2.i386.rpm
        Panasonic KX-MC6000 series Printer Driver for Linux.
        LANG=C rpm -qp --queryformat %{DESCRIPTION} libtetra-1.0.0-2.i386.rpm
        This software is Panasonic KX-MC6000 series Printer Driver for Linux. You can print from applications by using CUPS (Common Unix Printing System) which is the printing system for Linux. Other functions for KX-MC6000 series are not supported by this software.
        LANG=C rpm -qp --queryformat %{PREFIXES} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qp --queryformat %{POSTIN} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qp --queryformat %{POSTUN} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qp --queryformat %{PREUN} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qp --queryformat %{LICENSE} libtetra-1.0.0-2.i386.rpm
        GPL and LGPL (Version2)
        LANG=C rpm -qp --queryformat %{PREIN} libtetra-1.0.0-2.i386.rpm
        (none)
        LANG=C rpm -qcp libtetra-1.0.0-2.i386.rpm
        rpm -qpi libtetra-1.0.0-2.i386.rpm
        Name        : libtetra                      Relocations: (not relocatable)
        Version     : 1.0.0                         Vendor: Panasonic Communications Co., Ltd.
        Release     : 2                             Build Date: Tue 27 Apr 2010 05:16:40 AM EDT
        Install Date: (not installed)               Build Host: localhost.localdomain
        Group       : System Environment/Daemons    Source RPM: libtetra-1.0.0-2.src.rpm
        Size        : 31808                         License: GPL and LGPL (Version2)
        Signature   : (none)
        URL         : http://panasonic.net/pcc/support/fax/world.htm
        Summary     : Panasonic KX-MC6000 series Printer Driver for Linux.
        Description :
        This software is Panasonic KX-MC6000 series Printer Driver for Linux. You can print from applications by using CUPS (Common Unix Printing System) which is the printing system for Linux. Other functions for KX-MC6000 series are not supported by this software.
        LANG=C rpm -qpl libtetra-1.0.0-2.i386.rpm
        /usr/lib/libtetra.so
        /usr/lib/libtetra.so.1.0.0
        mkdir libtetra-1.0.0
        chmod 755 libtetra-1.0.0
        rpm2cpio libtetra-1.0.0-2.i386.rpm | lzma -t -q > /dev/null 2>&1
        rpm2cpio libtetra-1.0.0-2.i386.rpm | (cd libtetra-1.0.0; cpio --extract --make-directories --no-absolute-filenames --preserve-modification-time) 2>&1
        63 blocks
        chmod 755 libtetra-1.0.0/./
        chmod 755 libtetra-1.0.0/./usr
        chmod 755 libtetra-1.0.0/./usr/lib
        chown 0:0 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
        chmod 755 libtetra-1.0.0//usr/lib/libtetra.so.1.0.0
        mkdir libtetra-1.0.0/debian
        date -R
        Mon, 07 Feb 2011 11:03:58 -0500
        date -R
        Mon, 07 Feb 2011 11:03:58 -0500
        chmod 755 libtetra-1.0.0/debian/rules
        debian/rules binary 2>&1
        dh_testdir
        dh_testdir
        dh_testroot
        dh_clean -k -d
        dh_clean: No packages to build.
        dh_installdirs
        dh_installdocs
        dh_installchangelogs
        find . -maxdepth 1 -mindepth 1 -not -name debian -print0 | \
          xargs -0 -r -i cp -a {} debian/
        dh_compress
        dh_makeshlibs
        dh_installdeb
        dh_shlibdeps
        dh_gencontrol
        dh_md5sums
        dh_builddeb
        libtetra_1.0.0-2_i386.deb generated
        find libtetra-1.0.0 -type d -exec chmod 755 {} ;
        rm -rf libtetra-1.0.0
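    One detail worth noticing in the very verbose output is the line "dh_clean: No packages to build.": with debhelper that usually means no package in debian/control matches the build architecture, which fits the 64-bit host / 32-bit package suspicion. A hedged way to dig further, assuming your alien version supports the -g (generate) option, which unpacks the rpm and writes the debian/ build tree without building or cleaning anything up:

        sudo alien -g -k libtetra-1.0.0-2.i386.rpm   # generate the build tree only
        cd libtetra-1.0.0
        grep Architecture debian/control             # what debhelper will try to build
        sudo debian/rules binary 2>&1                # build by hand and watch the dh_* steps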

    Read the article

  • Configuring thousands of related products in Magento?

    - by Anonymous -
    I'm at a stage with a Magento store I'm developing where I've added all the products (all 6000 of them) and now would like to configure related products to up my conversion rate a bit. I was wondering if there was an extension anybody knew of that functions similarly to this one, with the most current version of Magento (Community Edition, 1.6.1). If not, would anyone be able to provide some pointers for writing a script that will run through each product and add 1-5 related products. I have a fairly basic idea of taking product title text and just doing a simple text similarity query between other product titles for now, just to get some related products up there, but the Magento database isn't making a terribly large amount of sense. Thanks to anyone who can shed some light on this. :)
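    If no extension fits, a one-off script is not much work. Below is a rough sketch only, assuming Magento 1.x and its stock catalog/product model: the 60% similarity threshold, the cap of five links, and the admin-store bootstrap are all illustrative choices, and the nested loop is O(n²) over the catalog, so treat it as a batch job run from the Magento root, not production code.

        <?php
        // Hypothetical one-off script: link products whose names look similar.
        require_once 'app/Mage.php';
        Mage::app('admin');

        $products = Mage::getModel('catalog/product')->getCollection()
            ->addAttributeToSelect('name');

        foreach ($products as $product) {
            $links = array();
            foreach ($products as $candidate) {
                if ($candidate->getId() == $product->getId()) {
                    continue;
                }
                // similar_text() reports the similarity percentage via $percent
                similar_text($product->getName(), $candidate->getName(), $percent);
                if ($percent > 60) { // arbitrary threshold
                    $links[$candidate->getId()] = array('position' => count($links) + 1);
                }
                if (count($links) >= 5) {
                    break; // keep 1-5 related products per item
                }
            }
            if (!empty($links)) {
                Mage::getModel('catalog/product')->load($product->getId())
                    ->setRelatedLinkData($links)
                    ->save();
            }
        }

    A smarter version would compare against a pre-tokenised index of titles instead of calling similar_text() some 36 million times across 6,000 products.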

    Read the article

  • What is the best database design and/or software to model a thesaurus?

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. Wikipedia defines it as: "In Information Science, Library Science, and Information Technology, specialized thesauri are designed for information retrieval. They are a type of controlled vocabulary, for indexing or tagging purposes. Such a thesaurus can be used as the basis of an index for online material. The Art and Architecture Thesaurus, for example, is used to index the Canadian [...]. Information retrieval thesauri are formally organized so that existing relationships between concepts are made explicit." What database software, design or model would best fit this? Are PHP and MySQL good technologies to handle it?
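    For the storage model itself, the classic shape is a term table plus a self-referencing relation table. A minimal MySQL sketch, with all names illustrative and the relation codes being the standard thesaurus types (broader, narrower, related, preferred/non-preferred term):

        -- Terms and their pairwise, typed relationships (illustrative schema)
        CREATE TABLE term (
            term_id    INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            label      VARCHAR(255) NOT NULL UNIQUE,
            scope_note TEXT NULL
        ) ENGINE=InnoDB;

        CREATE TABLE term_relation (
            from_term INT UNSIGNED NOT NULL,
            to_term   INT UNSIGNED NOT NULL,
            rel_type  ENUM('BT','NT','RT','USE','UF') NOT NULL, -- broader/narrower/related/use/used-for
            PRIMARY KEY (from_term, to_term, rel_type),
            FOREIGN KEY (from_term) REFERENCES term (term_id),
            FOREIGN KEY (to_term)   REFERENCES term (term_id)
        ) ENGINE=InnoDB;

    PHP and MySQL handle this shape comfortably; a graph database only starts to pay off if you need deep relationship traversals across many hops.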

    Read the article

  • DAL Exception handling in a MVP application

    - by Chathuranga
    In an MVP WinForms application I'm handling exceptions as follows in the DAL. Since user messaging is not a responsibility of the DAL, I want to move it into my Presentation class. Could you show me a standard way to do that?

        public bool InsertAccount(IBankAccount ba)
        {
            string selectStatement =
                @"IF NOT EXISTS (SELECT ac_no FROM BankAccount WHERE ac_no=@ac_no)
                  BEGIN INSERT INTO BankAccount ...";

            using (SqlConnection sqlConnection = new SqlConnection(db.ConnectionString))
            {
                using (SqlCommand sqlCommand = new SqlCommand(selectStatement, sqlConnection))
                {
                    try
                    {
                        sqlConnection.Open();
                        sqlCommand.Parameters.Add("@ac_no", SqlDbType.Char).Value = ba.AccountNumber;
                        //
                        //
                        sqlCommand.ExecuteNonQuery();
                        return true;
                    }
                    catch (Exception e)
                    {
                        MessageBox.Show("Error: " + e.Message);
                    }
                    if (sqlConnection.State == System.Data.ConnectionState.Open)
                        sqlConnection.Close();
                    return false;
                }
            }
        }
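    One conventional MVP answer, sketched below with assumed names (the view's ShowError method and the injected accountRepository are not from the original code): let the exception escape the DAL, and make the presenter the place where it is translated into user messaging.

        // DAL: no MessageBox here; the using blocks close the connection
        // even when ExecuteNonQuery throws.
        public bool InsertAccount(IBankAccount ba)
        {
            using (SqlConnection sqlConnection = new SqlConnection(db.ConnectionString))
            using (SqlCommand sqlCommand = new SqlCommand(selectStatement, sqlConnection))
            {
                sqlConnection.Open();
                sqlCommand.Parameters.Add("@ac_no", SqlDbType.Char).Value = ba.AccountNumber;
                return sqlCommand.ExecuteNonQuery() > 0;
            }
        }

        // Presenter: catch the data-access failure and route it to the view.
        public void SaveAccount(IBankAccount ba)
        {
            try
            {
                accountRepository.InsertAccount(ba);
            }
            catch (SqlException ex) // requires using System.Data.SqlClient
            {
                view.ShowError("Could not save the account: " + ex.Message);
            }
        }

    The view interface then exposes ShowError, which the WinForms implementation maps to a MessageBox, keeping both the DAL and the presenter free of UI dependencies.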

    Read the article

  • Programmatically finding the Landau notation (Big O or Theta notation) of an algorithm?

    - by Julien L
    I'm used to searching for the Landau (Big O, Theta...) notation of my algorithms by hand to make sure they are as optimized as they can be, but when the functions get really big and complex, it takes way too much time to do it by hand, and it's also prone to human error. I spent some time on Codility (coding/algo exercises) and noticed they give you the Landau notation for your submitted solution (both in time and memory usage). I was wondering how they do that... How would you do it? Is there another way besides lexical analysis or parsing of the code? PS: This question concerns mainly PHP and/or JavaScript, but I'm open to any language and theory.
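    For what it's worth, graders like Codility typically do not deduce the notation from the source at all; they time your solution against inputs of growing size and check the growth curve. A rough JavaScript sketch of that empirical approach: fit the slope of log(time) versus log(n), where a slope near 1 suggests O(n), near 2 suggests O(n^2), and so on (function and variable names are illustrative).

        // Empirical complexity estimate: time fn at several input sizes and
        // return the least-squares slope of the resulting log-log points.
        function estimateExponent(fn, sizes) {
            var points = sizes.map(function (n) {
                var input = [];
                for (var i = 0; i < n; i++) input.push(Math.random());
                var t0 = Date.now();
                fn(input);
                var elapsed = Math.max(Date.now() - t0, 1); // avoid log(0)
                return [Math.log(n), Math.log(elapsed)];
            });
            var mx = 0, my = 0;
            points.forEach(function (p) { mx += p[0]; my += p[1]; });
            mx /= points.length;
            my /= points.length;
            var num = 0, den = 0;
            points.forEach(function (p) {
                num += (p[0] - mx) * (p[1] - my);
                den += (p[0] - mx) * (p[0] - mx);
            });
            return num / den; // ~1 => O(n), ~2 => O(n^2)
        }

        // Example: estimateExponent(function (a) { a.sort(); }, [10000, 100000, 1000000]);

    Note the caveats: timing noise, JIT warm-up, and logarithmic factors (an O(n log n) sort lands near slope 1.1) mean this only ever gives an estimate, not a proof.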

    Read the article

< Previous Page | 233 234 235 236 237 238 239 240 241 242 243 244  | Next Page >