Search Results

Search found 3383 results on 136 pages for 'telerik reporting'.

Page 115/136 | < Previous Page | 111 112 113 114 115 116 117 118 119 120 121 122  | Next Page >

  • How to use Application Verifier to find memory leaks

    - by Patrick
    I want to find memory leaks in my application using standard utilities. Previously I used my own memory allocator, but other people (yes, you Neil) suggested using Microsoft's Application Verifier, but I can't seem to get it to report my leaks. I have the following simple application:

        #include <iostream>
        #include <conio.h>

        class X
        {
        public:
            X::X() : m_value(123) {}
        private:
            int m_value;
        };

        void main()
        {
            X *p1 = 0;
            X *p2 = 0;
            X *p3 = 0;
            p1 = new X();
            p2 = new X();
            p3 = new X();
            delete p1;
            delete p3;
        }

    This test clearly contains a memory leak: p2 is new'd but not deleted. I build the executable using the following command lines:

        cl /c /EHsc /Zi /Od /MDd test.cpp
        link /debug test.obj

    I downloaded Application Verifier (4.0.0665) and enabled all checks. If I now run my test application I can see a log of it in Application Verifier, but I don't see the memory leak. Questions:
    1. Why doesn't Application Verifier report a leak? Or isn't Application Verifier really intended to find leaks?
    2. If it isn't, which other tools are available that clearly report leaks at the end of the application (i.e. not by taking regular snapshots and comparing them, since this is not possible in an application taking 1GB or more), including the call stack of the place of allocation (so not the simple leak reporting at the end of the CRT)?
    3. If I don't find a decent utility, I still have to rely on my own memory manager (which does it perfectly).

    Read the article

  • How to prevent duplicate records being inserted with SqlBulkCopy when there is no primary key

    - by kscott
    I receive a daily XML file that contains thousands of records, each being a business transaction that I need to store in an internal database for use in reporting and billing. I was under the impression that each day's file contained only unique records, but have discovered that my definition of unique is not exactly the same as the provider's. The current application that imports this data is a C#.Net 3.5 console application; it does so using SqlBulkCopy into a MS SQL Server 2008 database table where the columns exactly match the structure of the XML records. Each record has just over 100 fields, and there is no natural key in the data, or rather the fields I can come up with that make sense as a composite key end up also having to allow nulls. Currently the table has several indexes, but no primary key. Basically the entire row needs to be unique: if one field is different, it is valid enough to be inserted. I looked at creating an MD5 hash of the entire row, inserting that into the database and using a constraint to prevent SqlBulkCopy from inserting the row, but I don't see how to get the MD5 hash into the BulkCopy operation, and I'm not sure whether the whole operation would fail and roll back if any one record failed, or whether it would continue. The file contains a very large number of records, so going row by row through the XML, querying the database for a record that matches all fields, and then deciding whether to insert is really the only way I can see of doing this. I was just hoping not to have to rewrite the application entirely, and the bulk copy operation is so much faster. Does anyone know of a way to use SqlBulkCopy while preventing duplicate rows, without a primary key? Or any suggestion for a different way to do this?
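
    As an illustration only (not the asker's code), one way to get a hash into the bulk copy is to add it as an extra column on the DataTable before calling WriteToServer, bulk copy into a staging table, and then run a single set-based insert that skips hashes already present. The table names, the staging step, the assumption that both tables carry an indexed RowHash column, and the DataTable-based loading are all assumptions:

        // Sketch: compute an MD5 over every field of a row and carry it along as a RowHash column.
        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Security.Cryptography;
        using System.Text;

        static class BulkLoader
        {
            static string HashRow(DataRow row)
            {
                var sb = new StringBuilder();
                foreach (object value in row.ItemArray)
                    sb.Append(value).Append('|');            // '|' keeps "ab","c" distinct from "a","bc"

                using (var md5 = MD5.Create())
                {
                    byte[] hash = md5.ComputeHash(Encoding.UTF8.GetBytes(sb.ToString()));
                    return BitConverter.ToString(hash).Replace("-", "");
                }
            }

            public static void Load(DataTable transactions, string connectionString)
            {
                // Hash the rows first so the hash covers only the original data fields.
                var hashes = new List<string>();
                foreach (DataRow row in transactions.Rows)
                    hashes.Add(HashRow(row));

                transactions.Columns.Add("RowHash", typeof(string));
                for (int i = 0; i < transactions.Rows.Count; i++)
                    transactions.Rows[i]["RowHash"] = hashes[i];

                using (var bulk = new SqlBulkCopy(connectionString))
                {
                    bulk.DestinationTableName = "dbo.Transactions_Staging";   // hypothetical staging table
                    bulk.WriteToServer(transactions);
                }

                // Followed by one set-based statement on the server, for example:
                //   INSERT INTO dbo.Transactions
                //   SELECT s.* FROM dbo.Transactions_Staging s
                //   WHERE NOT EXISTS (SELECT 1 FROM dbo.Transactions t WHERE t.RowHash = s.RowHash);
            }
        }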

    Read the article

  • Handlers returns 404 error on IIS7.5 integrated pipeline

    - by er-v
        <httpHandlers>
          <add path="ajaxpro/*.ashx" verb="POST,GET"
               type="AjaxPro.AjaxHandlerFactory, AjaxPro.2" />
          <add path="Reserved.ReportViewerWebControl.axd" verb="*"
               type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
               validate="false" />
          <remove verb="*" path="*.asmx" />
          <add verb="*" path="*.asmx" validate="false"
               type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <add verb="*" path="*_AppService.axd" validate="false"
               type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
          <add verb="GET,HEAD" path="ScriptResource.axd" validate="false"
               type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" />
        </httpHandlers>

    Hello everybody. I have a problem with IIS 7.5 in integrated mode. When I use it in classic mode the handlers presented above work fine, but if I switch to the integrated pipeline, all requests that should be handled return a 404 error. Why?
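
    For context, and only as a hedged sketch: under the integrated pipeline IIS 7+ takes its handler mappings from <system.webServer><handlers>, not from <system.web><httpHandlers>, so registrations that exist only in the latter are ignored. Mirroring the first two entries above might look roughly like this; the name attributes are arbitrary and the remaining handlers would follow the same pattern:

        <system.webServer>
          <handlers>
            <add name="AjaxPro" path="ajaxpro/*.ashx" verb="POST,GET"
                 type="AjaxPro.AjaxHandlerFactory, AjaxPro.2"
                 preCondition="integratedMode" />
            <add name="ReportViewer" path="Reserved.ReportViewerWebControl.axd" verb="*"
                 type="Microsoft.Reporting.WebForms.HttpHandler, Microsoft.ReportViewer.WebForms, Version=9.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a"
                 preCondition="integratedMode" />
          </handlers>
        </system.webServer>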

    Read the article

  • Handling multiple column data with Java

    - by Ender
    I am writing an application that reads in a large number of basic user details in the following format; once read in, it then allows the user to search for a user's details using their email:

        NAME            ROLE          EMAIL
        ---------------------------------------------------
        Joe Bloggs      Manager       [email protected]
        John Smith      Consultant    [email protected]
        Alan Wright     Tester        [email protected]
        ...

    The problem I am facing is that I need to store the details of a large number of people that have worked at the company. The file containing these details will be written on a yearly basis, simply for reporting purposes, but the program will need to be able to access these details quickly. The way I aim to access these files is to have a program that asks the user for the unique email of the member of staff, and then returns the name and the role from that line of the file. I've played around with text files, but am struggling with how I would handle multiple columns of data when it comes to searching this large file. What is the best format to store such data in? A text file? XML? The size doesn't bother me, but I'd like to be able to search it as quickly as possible. The file will need to contain a lot of entries, probably over the 10K mark over time.
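
    As a minimal illustration of the lookup side (whatever the storage format ends up being), an in-memory map keyed by email makes each search a constant-time operation once the file is loaded, and 10K+ records is tiny for such a map. The tab-separated layout and file handling below are assumptions:

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.util.HashMap;
        import java.util.Map;

        public class StaffDirectory {

            static class Person {
                final String name;
                final String role;
                Person(String name, String role) { this.name = name; this.role = role; }
            }

            private final Map<String, Person> byEmail = new HashMap<String, Person>();

            // Load the whole file once; each line is assumed to be name<TAB>role<TAB>email.
            public void load(String fileName) throws IOException {
                BufferedReader reader = new BufferedReader(new FileReader(fileName));
                try {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] fields = line.split("\t");
                        if (fields.length == 3) {
                            byEmail.put(fields[2].toLowerCase(), new Person(fields[0], fields[1]));
                        }
                    }
                } finally {
                    reader.close();
                }
            }

            // O(1) lookup by the unique email key.
            public Person find(String email) {
                return byEmail.get(email.toLowerCase());
            }
        }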

    Read the article

  • Selenium screenshots using rspec

    - by Thomas Albright
    I am trying to capture screenshots on test failure using selenium-client and rspec. I run this command:

        $ spec my_spec.rb \
          --require 'rubygems,selenium/rspec/reporting/selenium_test_report_formatter' \
          --format=Selenium::RSpec::SeleniumTestReportFormatter:./report.html

    It creates the report correctly when everything passes, since no screenshots are required. However, when the test fails, I get this message, and the report has blank screenshots:

        WARNING: Could not capture HTML snapshot: execution expired
        WARNING: Could not capture page screenshot: execution expired
        WARNING: Could not capture system screenshot: execution expired
        Problem while capturing system state: execution expired

    What is causing this 'execution expired' error? Am I missing something important in my spec? Here is the code for my_spec.rb:

        require 'rubygems'
        gem "rspec", "=1.2.8"
        gem "selenium-client"
        require "selenium/client"
        require "selenium/rspec/spec_helper"

        describe "Databases" do
          attr_reader :selenium_driver
          alias :page :selenium_driver

          before(:all) do
            @selenium_driver = Selenium::Client::Driver.new \
              :host => "192.168.0.10",
              :port => 4444,
              :browser => "*firefox",
              :url => "http://192.168.0.11/",
              :timeout_in_seconds => 10
          end

          before(:each) do
            @selenium_driver.start_new_browser_session
          end

          # The system capture needs to happen BEFORE closing the Selenium session
          append_after(:each) do
            @selenium_driver.close_current_browser_session
          end

          it "backed up" do
            page.open "/SQLDBDetails.aspx"
            page.click "btnBackup", :wait_for => :page
            page.text?("Pending Backup").should be_true
          end
        end

    Read the article

  • BIRT logging in the onFetch step of a dataset

    - by Mark Underwood
    Hi all, I'm having trouble with some JavaScript in the onFetch step of a dataset in a BIRT report. I've added logging in the initialize step of the report in a few different ways. The runtime I'm using is Tivoli Common Reporting, and it supplies a logging framework. It's initialised like so:

        reportContext.setPersistentGlobalVariable("logfileName", "DateRangeParm.log");
        setupLogging();
        logInitialize();
        debugLogger("Started logging in initialize step");
        debugLogger("Date: " + new Date());

    This works fine for logging in the report-level steps (initialize, BeforeRender, AfterRender, etc.), but I can't seem to log anything in the dataset steps such as onFetch. I've also tried:

        importPackage(Packages.java.util.logging);
        var fileHandler = new FileHandler("/tmp/birt.log", true);
        var rootLogger = Logger.getLogger("");
        rootLogger.addHandler(fileHandler);

    as the BIRT instructions tell me to do in the BIRT FAQ. Once again this allowed me to log things in the main report steps (BeforeRender etc.) but not in the dataset onFetch step. I've also tried putting the previous JavaScript into onFetch itself, and that didn't seem to work either. It's a bit of a mystery. I'm running Ubuntu 9.04, IBM Java 1.5, Eclipse 3.5.0 and BIRT 2.5.1. Any help would be great.

    Read the article

  • MySQL performance - 100Mb ethernet vs 1Gb ethernet

    - by Rob Penridge
    Hi all, I've just started a new job and noticed that the analysts' computers are connected to the network at 100Mbps. The ODBC queries we run against the MySQL server can easily return 500MB+, and it seems at times when the servers are under high load the DBAs kill low-priority jobs because they are taking too long to run. My question is this: how much of this server time is spent executing the request, and how much is spent returning the data to the client? Could the query speeds be improved by upgrading the network connections to 1Gbps? (Updated for the why): The database in question was built to accommodate reporting needs and contains massive amounts of data. We usually work with subsets of this data at a granular level in external applications such as SAS or Excel, hence the reason for the large amounts of data being transmitted. The queries are not poorly structured; they are very simple and the appropriate joins/indexes etc. are being used. I've removed 'query' from the title of the post as I realised this question is more to do with general MySQL performance than query-related performance. I was kind of hoping that someone with a Gigabit connection might be able to quantify some results for me here by running a query that returns a decent amount of data, then limiting their connection speed to 100Mb and rerunning the same query, ideally in an environment where loads are reasonably stable so as not to skew the results. If ethernet speed can improve the situation I wanted some quantifiable results to help argue my case for upgrading the network connections. Thanks Rob
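
    As a rough lower bound on transfer time alone (ignoring server execution, TCP/ODBC overhead and concurrent traffic): 500 MB is roughly 4,000 megabits, so a fully saturated 100 Mbps link needs at least about 40 seconds just to move the result set, versus about 4 seconds at 1 Gbps. Whether that saving matters depends on how much of each job's elapsed time is execution rather than transfer.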

    Read the article

  • MySQL Connection Timeout Issue - Grails Application on Tomcat using Hibernate and ORM

    - by gav
    Hi guys, I have a small Grails application running on Tomcat in Ubuntu on a VPS. I use MySQL as my datastore and everything works fine unless I leave the application alone for more than half a day (8 hours?). I did some searching and apparently this is the default wait_timeout in mysql.cnf, so after 8 hours the connection will die, but Tomcat won't know, so when the next user tries to view the site they will see the connection failure error. Refreshing the page will fix this, but I want to get rid of the error altogether. For my version of MySQL (5.0.75) I have only my.cnf and it doesn't contain such a parameter; in any case, changing this parameter doesn't solve the problem. This Blog Post seems to be reporting a similar error, but I still don't fully understand what I need to configure to get this fixed, and I am hoping that there is a simpler solution than another third-party library. The machine I'm running on has 256MB RAM and I'm trying to keep the number of programs/services running to a minimum. Is there something I can configure in Grails / Tomcat / MySQL to make this go away? Thanks in advance, Gav

    From my catalina.out:

        2010-04-29 21:26:25,946 [http-8080-2] ERROR util.JDBCExceptionReporter - The last packet successfully received from the server was 102,906,722 milliseconds$
        2010-04-29 21:26:25,994 [http-8080-2] ERROR errors.GrailsExceptionResolver - Broken pipe
        java.net.SocketException: Broken pipe
            at java.net.SocketOutputStream.socketWrite0(Native Method)
            ...
        2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed.
        2010-04-29 21:26:26,016 [http-8080-2] ERROR util.JDBCExceptionReporter - Already closed.
        2010-04-29 21:26:26,017 [http-8080-2] ERROR servlet.GrailsDispatcherServlet - HandlerInterceptor.afterCompletion threw exception
        org.hibernate.exception.GenericJDBCException: Cannot release connection
            at java.lang.Thread.run(Thread.java:619)
        Caused by: java.sql.SQLException: Already closed.
            at org.apache.commons.dbcp.PoolableConnection.close(PoolableConnection.java:84)
            at org.apache.commons.dbcp.PoolingDataSource$PoolGuardConnectionWrapper.close(PoolingDataSource.java:181)
            ... 1 more
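
    For reference, when the pool behind the app is Commons DBCP (as the stack trace suggests), the usual way to survive MySQL's wait_timeout is to have the pool validate connections before handing them out. If the DataSource were defined as a Tomcat JNDI resource, that would look roughly like the sketch below; the resource name, credentials and pool sizes are placeholders, and a Grails-managed pool would need the equivalent validationQuery/testOnBorrow settings applied to its own DBCP configuration instead:

        <!-- conf/context.xml (or the webapp's META-INF/context.xml), illustrative only -->
        <Resource name="jdbc/myapp" auth="Container" type="javax.sql.DataSource"
                  driverClassName="com.mysql.jdbc.Driver"
                  url="jdbc:mysql://localhost:3306/myapp"
                  username="dbuser" password="dbpass"
                  maxActive="8" maxIdle="4"
                  validationQuery="SELECT 1"
                  testOnBorrow="true" />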

    Read the article

  • Using the Salesforce PHP API to generate a User Profile Report

    - by Phill Pafford
    Hi all, I'm looking to do a security audit of all user permissions. I think I can use the Salesforce PHP Toolkit 11 API to generate the report, but I'm new to Salesforce and a little confused about where to start. In the Salesforce setup, under Administration Setup -> Manage Users -> Profiles -> Profile Names, if you click on each user name you can see the permissions set and the actions the user is allowed to perform. I wanted a way to generate an Excel report for all users with all the permissions for each user. Example:

        User Name | Can view Case | Can edit case | Can delete case | etc...
        phill     | yes           | no            | no              | x...

    and so on. I see that in Salesforce I can run a high-level report on the Profile, but I need to drill down for each user. Has anyone ever done this type of reporting before? Any help on this would be great. Thanks in advance, --Phill

    Read the article

  • MacPorts - Installing Port, Dependencies Failed

    - by Louis
    I am attempting to install xulrunner on OSX 10.6.3 using the following:

        sudo port install xulrunner

    However, I am receiving the following error:

        nat-10-200-136-126:phoneyc-new $ sudo port install xulrunner
        --->  Computing dependencies for xulrunner
        --->  Activating zlib @1.2.5_0
        Error: The following dependencies failed to build: gconf dbus-glib glib2 zlib gtk-doc docbook-xml
          docbook-xml-4.1.2 xmlcatmgr docbook-xml-4.2 docbook-xml-4.3 docbook-xml-4.4 docbook-xml-4.5
          docbook-xml-5.0 docbook-xsl gnome-doc-utils iso-codes libxslt libxml2 p5-xml-parser py26-libxml2
          python26 bzip2 db46 gdbm openssl readline sqlite3 tk Xft2 fontconfig freetype xrender xorg-libX11
          xorg-bigreqsproto xorg-inputproto xorg-kbproto xorg-libXau xorg-xproto xorg-libXdmcp xorg-util-macros
          xorg-xcmiscproto xorg-xextproto xorg-xf86bigfontproto xorg-xtrans xorg-renderproto tcl
          xorg-libXScrnSaver xorg-libXext xorg-scrnsaverproto rarian getopt intltool gnome-common
          p5-getopt-long p5-pathtools p5-scalar-list-utils gtk2 atk cairo libpixman libpng jasper jpeg pango
          shared-mime-info tiff xorg-libXcomposite xorg-compositeproto xorg-libXfixes xorg-fixesproto
          xorg-libXcursor xorg-libXdamage xorg-damageproto xorg-libXi xorg-libXinerama xorg-xineramaproto
          xorg-libXrandr xorg-randrproto orbit2 libidl policykit heimdal lcms libcanberra gstreamer bison flex
          gzip texinfo lzmautils libvorbis libogg libnotify nss xorg-libXt xorg-libsm xorg-libice
        Error: Status 1 encountered during processing.
        Before reporting a bug, first run the command again with the -d flag to get complete output.
        nat-10-200-136-126:phoneyc-new$

    I am unsure how to correct this issue, so any help would be much appreciated!

    Read the article

  • Select the latest record for each category linked available on an object

    - by Simpleton
    I have a table tblMachineReports with the columns Status (varchar), LogDate (datetime), Category (varchar), and MachineID (int). I want to retrieve the latest status update from each category for every machine, in effect getting a snapshot of the latest statuses of all the machines, unique to their MachineID. The table data looks like:

        Category - Status  - MachineID - LogDate
        cata     - status1 - 001       - date1
        cata     - status2 - 002       - date2
        catb     - status3 - 001       - date2
        catc     - status2 - 002       - date4
        cata     - status3 - 001       - date5
        catc     - status1 - 001       - date6
        catb     - status2 - 001       - date7
        cata     - status2 - 002       - date8
        catb     - status2 - 002       - date9
        catc     - status2 - 001       - date10

    Restated, I have multiple machines reporting on multiple statuses in tblMachineReports. All the rows are created through inserts, so there will obviously be duplicate entries for machines as new statuses come in. None of the column values can be predicted, so I can't do any ='some hard coded string' comparisons in any part of the select statement. For the sample table I provided, the desired results would look like:

        Category - Status  - MachineID - LogDate
        catc     - status2 - 002       - date4
        cata     - status3 - 001       - date5
        catb     - status2 - 001       - date7
        cata     - status2 - 002       - date8
        catb     - status2 - 002       - date9
        catc     - status2 - 001       - date10

    What would the select statement look like to achieve this, getting the latest status for each category on each machine, using MS SQL Server 2008? I have tried different combinations of subqueries combined with aggregate MAX(LogDate)s, along with joins, GROUP BYs, DISTINCTs, and what-not, but have yet to find a working solution.
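
    For illustration, the usual SQL Server 2008 pattern for "newest row per group" is ROW_NUMBER() partitioned by the grouping columns; a sketch against the column names above (untested against the real table):

        -- Keep only the newest row per (MachineID, Category) pair
        WITH ranked AS (
            SELECT Category, Status, MachineID, LogDate,
                   ROW_NUMBER() OVER (PARTITION BY MachineID, Category
                                      ORDER BY LogDate DESC) AS rn
            FROM tblMachineReports
        )
        SELECT Category, Status, MachineID, LogDate
        FROM ranked
        WHERE rn = 1
        ORDER BY LogDate;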

    Read the article

  • Visual Studio setup problem - 'A problem has been encountered while loading the setup components. Canceling setup.'

    - by kronoz
    Hi all, I've had a serious issue with my Visual Studio 2008 setup. I receive the ever-so-useful error 'A problem has been encountered while loading the setup components. Canceling setup.' whenever I try to uninstall, reinstall or repair Visual Studio 2008 (Team System version). If I can't resolve this issue I have no choice but to completely wipe my computer and start again, which will take all day! I've recently received very strange errors when trying to build projects, regarding components running out of memory (despite having ~2GB of physical memory free at the time), which has rendered my current VS install useless. Note that I installed the VS2005 shell version using the vs_setup.msi file in the SQL Server folder after I had installed VS2008, in order to gain access to the SQL Server 2005 Reporting Services designer in Business Intelligence Development Studio (this is inexplicably unavailable in VS2008). Does anyone have any solutions to this problem? P.S.: I know this isn't directly related to programming, however I feel this is appropriate to SO as it is directly related to my ability to program at all! Note: A colleague found a solution to this problem; hopefully this should help others with the same issue.

    Read the article

  • Exporting from SSRS 2008 ReportViewer to Excel Causes Duplicate Columns

    - by Daniel Coffman
    I have a report that groups months by quarters, so each quarter has three months and the display of the months under the quarter is toggled by the quarter header. It looks just fine in the ReportViewer, but when exporting to Excel the first month in the quarter with data is duplicated and appended to the end of the quarter group. Here is what it looks like in the ReportViewer (with Quarters 2 and 4 expanded; note May and June do not have any data and show blank columns by design): http://i.imgur.com/MykZE.png This is how it looks when exported to Excel: http://i.imgur.com/zfLuk.png A collapsed quarter should only show the LAST month in the quarter. You can see that in the Excel export July is inserted in Q1 even though it should be hidden entirely since that quarter is collapsed, December is appended to Q2, January is inserted into Q3, and April is duplicated and appended to Q4. Exporting to any format OTHER than Excel works correctly and does not insert these columns. A similar bug for rows was filed and marked as "by design": http://connect.microsoft.com/SQLServer/feedback/details/508823/reporting-services-2008-group-by-export-to-excel-duplicate-rows-csv-ok-pdf-ok How do I stop the export-to-Excel feature from inserting these duplicate columns?

    Read the article

  • Hyphen vs Dash : Replace Dash with Hyphen

    - by soldieraman
    Alright, so we had a problem recently: in Reporting Services some of the string columns were appearing as gibberish Chinese characters. On further investigation we found it was the hyphen. Well, that's what we thought at first. On further investigation we found it was a dash (an en dash). Basically the reason this has happened is that people copy-paste values into this column from Word, which converts hyphens into dashes automatically. But if you look at the database they both look the same, though on the application side you can see the difference. How do I replace the dash with a normal hyphen? If you copy the value and put it in SQL Server, a hyphen is gray while a dash is black, but they both look exactly the same (i.e. not bigger or smaller). The problem is I can't then write a REPLACE script (they are the freakin' same): REPLACE('-' with '-'). Is there a way special characters like the dash can be identified in SQL Server? SQL Server v2005.
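
    For illustration, the character can be addressed by its Unicode code point rather than by pasting it: an en dash is U+2013 (decimal 8211), so NCHAR(8211) names it unambiguously even though it renders like a hyphen. Table and column names below are placeholders:

        -- What is the pasted character? (8211 = en dash, 8212 = em dash, 45 = hyphen)
        SELECT UNICODE(N'–');

        -- Find rows containing an en dash
        SELECT SomeColumn
        FROM dbo.SomeTable
        WHERE SomeColumn LIKE N'%' + NCHAR(8211) + N'%';

        -- Replace en dashes with plain hyphens
        UPDATE dbo.SomeTable
        SET SomeColumn = REPLACE(SomeColumn, NCHAR(8211), N'-');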

    Read the article

  • SMO restore of SQL database doesn't overwrite

    - by Tom H.
    I'm trying to restore a database from a backup file using SMO. If the database does not already exist then it works fine. However, if the database already exists then I get no errors, but the database is not overwritten. The "restore" process still takes just as long, so it looks like it's working and doing a restore, but in the end the database has not changed. I'm doing this in PowerShell using SMO. The code is a bit long, but I've included it below. You'll notice that I do set $restore.ReplaceDatabase = $true. Also, I use a try-catch block and report on any errors (I hope), but none are returned. Any obvious mistakes? Is it possible that I'm not reporting some error and it's being hidden from me? Thanks for any help or advice that you can give!

        function Invoke-SqlRestore {
            param(
                [string]$backup_file_name,
                [string]$server_name,
                [string]$database_name,
                [switch]$norecovery=$false
            )
            # Get a new connection to the server
            [Microsoft.SqlServer.Management.Smo.Server]$server = New-SMOconnection -server_name $server_name
            Write-Host "Starting restore to $database_name on $server_name."
            Try {
                $backup_device = New-Object("Microsoft.SqlServer.Management.Smo.BackupDeviceItem") ($backup_file_name, "File")

                # Get local paths to the Database and Log file locations
                If ($server.Settings.DefaultFile.Length -eq 0) { $database_path = $server.Information.MasterDBPath }
                Else { $database_path = $server.Settings.DefaultFile }
                If ($server.Settings.DefaultLog.Length -eq 0) { $database_log_path = $server.Information.MasterDBLogPath }
                Else { $database_log_path = $server.Settings.DefaultLog }

                # Load up the Restore object settings
                $restore = New-Object Microsoft.SqlServer.Management.Smo.Restore
                $restore.Action = 'Database'
                $restore.Database = $database_name
                $restore.ReplaceDatabase = $true
                if ($norecovery.IsPresent) { $restore.NoRecovery = $true }
                Else { $restore.Norecovery = $false }
                $restore.Devices.Add($backup_device)

                # Get information from the backup file
                $restore_details = $restore.ReadBackupHeader($server)
                $data_files = $restore.ReadFileList($server)

                # Restore all backup files
                ForEach ($data_row in $data_files) {
                    $logical_name = $data_row.LogicalName
                    $physical_name = Get-FileName -path $data_row.PhysicalName
                    $restore_data = New-Object("Microsoft.SqlServer.Management.Smo.RelocateFile")
                    $restore_data.LogicalFileName = $logical_name
                    if ($data_row.Type -eq "D") {
                        # Restore Data file
                        $restore_data.PhysicalFileName = $database_path + "\" + $physical_name
                    }
                    Else {
                        # Restore Log file
                        $restore_data.PhysicalFileName = $database_log_path + "\" + $physical_name
                    }
                    [Void]$restore.RelocateFiles.Add($restore_data)
                }

                $restore.SqlRestore($server)

                # If there are two files, assume the next is a Log
                if ($restore_details.Rows.Count -gt 1) {
                    $restore.Action = [Microsoft.SqlServer.Management.Smo.RestoreActionType]::Log
                    $restore.FileNumber = 2
                    $restore.SqlRestore($server)
                }
            }
            Catch {
                $ex = $_.Exception
                Write-Output $ex.message
                $ex = $ex.InnerException
                while ($ex.InnerException) {
                    Write-Output $ex.InnerException.message
                    $ex = $ex.InnerException
                }
                Throw $ex
            }
            Finally {
                $server.ConnectionContext.Disconnect()
            }
            Write-Host "Restore ended without any errors."
        }

    Read the article

  • How to print data from C#

    - by Hybryd
    I've searched Stack Overflow and Google and found many ways to print from C#. The most obvious option for me would be to populate a blank white Windows form with label, textbox and picturebox elements and print it as a form. That approach is very poor because it prints at 72 DPI and is not flexible for multi-page printing. The next way I found that looked good is using iTextSharp, but the problem is that iTextSharp only generates PDFs, and you have to open them in a PDF viewer and print from there. I love this way of thinking, where I create a paragraph and then fill it with text and graphics, so I found this thread http://www.devarticles.com/c/a/C-Sharp/Printing-Using-C-sharp/ where it discusses how to create your own printing engine in C#, something like iTextSharp but very lightweight... Now that I've said that, I want to know: is there any ready-to-use printing engine like iTextSharp, but made for printing rather than PDF generation? What is the best way to print something without using reporting services like Crystal Reports? I think Crystal Reports wouldn't work for my case because I don't want to print generic reports, but some text and graphics that I need to dynamically generate every time I print.
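
    For illustration, the plain .NET building block for this is System.Drawing.Printing.PrintDocument: text and graphics are drawn with GDI+ in its PrintPage event at the printer's resolution rather than as a 72-DPI form snapshot. A minimal sketch, where the fonts, layout and single-page assumption are mine and not a recommended engine:

        using System.Drawing;
        using System.Drawing.Printing;

        class SimplePrinter
        {
            public static void Print(string title, string body)
            {
                var doc = new PrintDocument();
                doc.PrintPage += (sender, e) =>
                {
                    Graphics g = e.Graphics;
                    RectangleF area = e.MarginBounds;

                    using (var titleFont = new Font("Arial", 16, FontStyle.Bold))
                    using (var bodyFont = new Font("Arial", 10))
                    {
                        // Title, then a rule, then the body flowed into the remaining area.
                        g.DrawString(title, titleFont, Brushes.Black, area.Left, area.Top);
                        g.DrawLine(Pens.Black, area.Left, area.Top + 30, area.Right, area.Top + 30);

                        var bodyArea = new RectangleF(area.Left, area.Top + 40,
                                                      area.Width, area.Height - 40);
                        g.DrawString(body, bodyFont, Brushes.Black, bodyArea);
                    }
                    e.HasMorePages = false;   // multi-page flow would be handled here
                };
                doc.Print();
            }
        }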

    Read the article

  • PHP session_write_close() causes empty response

    - by Xeoncross
    When using session_write_close() in a shutdown function at the end of my script, PHP just dies. There is no error logged, no response headers (Firebug), and no data (not even whitespace!) returned. I have full PHP error reporting on with STRICT enabled, on PHP 5.2.1. My guess is that since session_write_close() is being called after shutdown, some fatal error is being encountered that crashes PHP before it has a chance to send the output or log anything. This only happens on the logout page where I first:

        ...
        // If there is no session to delete (not started)
        if ( ! session_id()) {
            return;
        }

        // Get the session name
        $name = session_name();

        // Delete the session cookie (if it exists)
        if ( ! empty($_COOKIE[$name])) {
            // Get the current cookie config
            $params = session_get_cookie_params();

            // Delete the cookie from globals
            unset($_COOKIE[$name], $_SESSION);

            // Delete the cookie on the user agent
            setcookie($name, '', time()-43200, $params['path'], $params['domain'], $params['secure']);
        }

        // Destroy the session
        session_destroy();
        ...

    then 2) do some more stuff, 3) issue a redirect, and 4) finally, after the whole page is done, the register_shutdown_function() callback I registered earlier runs and calls session_write_close(), which saves the session to the database. The end. Since this blank response only occurs on logout, I'm guessing that I'm not restarting the session properly, which is causing session_write_close() to die fatally at the end of the script.

    Read the article

  • SubSonic missing stored procedures in StoredProcedures.cs when generated with SubCommander sonic.exe

    - by Mark
    We have been using SubSonic to generate our DAL with a lot of success on VS2005 and SubSonic Tools 2.0.3. SubSonic Tools do not work on VS2008 (as far as we can work out) so we have tried to use SubCommander\sonic.exe and are now hitting some problems. When we regenerate the project using SubCommander\sonic.exe and try to compile we get some errors reporting missing members (which should have been automatically generated based on the stored procedures we have). On closer inspection it looks like my StoredProcedures.cs file is missing some (not all) automatically generated methods for my classes. As an example, I have 2 procs: [dbo]._ClassA_Func1 [dbo]._ClassA_Func2 Only one of these is being generated in the StoredProcedures.cs file. These methods generate fine using the SubSonic Tools plugin. We have tried now with versions 2.1 and 2.2 of SubSonic with the same issue. We are still on .NET 2.0 so cannot use SubSonic 3.0. I have checked the permissions of both procs using fn_my_permissions and they seem identical. Does anyone have any ideas on what I can check? Thanks -- Mark

    Read the article

  • Text mining on large database (data mining)

    - by yox
    Hello, I have a large database of resumes (CVs), and a table skills grouping all users' skills. Inside that table there is a field skill_text that describes the skill in full text. I'm looking for an algorithm/software/method to extract significant terms/phrases from that table in order to build a new table of standardized skills. Here are some example skills extracted from the DB:

        Sectoral and competitive analysis
        Business Development (incl. in international settings)
        Specific structure and road design software - Microstation, Macao, AutoCAD (basic knowledge)
        Creative work (Photoshop, In-Design, Illustrator)
        checking and reporting back on campaign progress
        organising and attending events and exhibitions
        Development : Aptana Studio, PHP, HTML, CSS, JavaScript, SQL, AJAX
        Discipline: One to one marketing, E-marketing (SEO & SEA, display, emailing, affiliate program) Mix marketing, Viral Marketing, Social network marketing.

    The output should be something like:

        Sectoral and competitive analysis
        Business Development
        Specific structure and road design software - Macao AutoCAD
        Photoshop
        In-Design
        Illustrator
        organising events
        Development
        Aptana Studio
        PHP
        HTML
        CSS
        JavaScript
        SQL
        AJAX
        Mix marketing
        Viral Marketing
        Social network marketing
        emailing
        SEO
        One to one marketing

    As you can see, only skills remain, with no other surrounding text. I know this is possible using text mining techniques, but how do I do it? The database is really large, which is a good thing because we can calculate term frequency and decide whether something is a real skill or just meaningless text... The big problem is: how do I determine that "blablabla" is a skill? Thanks
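
    As a toy illustration of the frequency idea only (not a full solution): counting how many distinct skill_text rows each short phrase appears in separates recurring skill names from one-off noise. The tokenisation, n-gram length and threshold below are arbitrary assumptions:

        import re
        from collections import Counter

        def candidate_skills(skill_texts, min_docs=50, max_ngram=3):
            """Return phrases that occur in at least min_docs distinct skill_text rows."""
            doc_freq = Counter()
            for text in skill_texts:
                tokens = re.findall(r"[A-Za-z+#.]+", text.lower())
                seen = set()
                for n in range(1, max_ngram + 1):
                    for i in range(len(tokens) - n + 1):
                        seen.add(" ".join(tokens[i:i + n]))
                doc_freq.update(seen)            # count each phrase once per row
            return [phrase for phrase, count in doc_freq.most_common() if count >= min_docs]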

    Read the article

  • Ant target generate empty suite xml file

    - by user200317
    I am using Ant for my project and I have been trying to generate a JUnit report using an Ant target. The problem I run into is that at the end of the execution my TESTS-TestSuites.xml is empty, but all the other individual test XML files have data. Because of this my HTML reports are empty, in the sense that the results show "0". Here is my Ant target:

        <!-- JUnit Reporting -->
        <target name="test-report" depends="build-all" description="Generate Test Results as HTML">
            <taskdef name="junitreport"
                     classname="org.apache.tools.ant.taskdefs.optional.junit.XMLResultAggregator"/>
            <junit printsummary="on" haltonfailure="off" haltonerror="off" fork="yes">
                <batchtest fork="yes" todir="${test.reports}" filtertrace="on">
                    <fileset dir="${build.classes}" includes="**/Test*Selenium.class"/>
                </batchtest>
                <formatter type="plain" usefile="false"/>
                <formatter type="xml" usefile="true"/>
                <classpath>
                    <path refid="classpath"/>
                    <path refid="application"/>
                </classpath>
            </junit>
            <echo message="running JUnit Report" />
            <junitreport todir="${test.reports}">
                <fileset dir="${test.reports}">
                    <include name="Test-*.xml" />
                </fileset>
                <report format="frames" todir="${test.reports.html}" />
            </junitreport>
        </target>

    This is what I get as the Ant print summary:

        [junitreport] Processing C:\YukonSelenium\reports\TESTS-TestSuites.xml to C:\DOCUME~1\user\LOCALS~1\Temp\null1848051184
        [junitreport] Loading stylesheet jar:file:/C:/DevApps/apache-ant-1.7.1/lib/ant-junit.jar!/org/apache/tools/ant/taskdefs/optional/junit/xsl/junit-frames.xsl
        [junitreport] Transform time: 859ms
        [junitreport] Deleting: C:\DOCUME~1\user\LOCALS~1\Temp\null1848051184

    Here's how the JUnit report looks: http://www.freeimagehosting.net/image.php?43dd69d3b8.jpg

    Thanks in advance,

    Read the article

  • What are good NoSQL and non-relational database solutions for audit/logging database

    - by Juha Syrjälä
    What would be a suitable database for the following? I am especially interested in your experiences with non-relational NoSQL systems. Are they any good for this kind of usage? Which system have you used and would recommend, or should I go with a normal relational database (DB2)? I need to gather audit-trail/logging information from a bunch of sources to a centralized server where I could generate reports efficiently and examine what is happening in the system. Typically an audit/logging event would always consist of some mandatory fields, for example:

        - globally unique id (somehow generated by the program that generated the event)
        - timestamp
        - event type (i.e. user logged in, error happened, etc.)
        - some information about the source (server1, server2)

    Additionally the event could contain 0-N key-value pairs, where a value might be up to a few kilobytes of text. Requirements:

        - It must run on a Linux server.
        - It should work with a high volume of data (100GB, for example).
        - It should support some kind of efficient full-text search.
        - It should allow concurrent reading and writing.
        - It should be flexible about adding new event types and adding/removing key-value pairs in new events. Flexible = no changes should be required to the database schema; the application generating the events can just add new event types/new fields as needed.
        - It should be efficient to run queries against the database, for reporting and exploring what happened. For example: how many events with type=X occurred in some time period; get all events where field A has value Y; get all events with type X where field A has value 1 and field B is not 2 and the event occurred in the last 24h.
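
    For illustration, one such event rendered as a schemaless document, which is the shape document-oriented stores handle without schema migrations; every value below is invented:

        {
          "id": "d3b0c442-98fc-1c14-20100430-000123",
          "timestamp": "2010-04-30T08:15:00Z",
          "type": "user_login_failed",
          "source": "server2",
          "fields": {
            "username": "jsmith",
            "client_ip": "10.1.2.3",
            "message": "3rd consecutive failure, account locked"
          }
        }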

    Read the article

  • Unable to run WCAT against DotNetNuke with NTLM authentication

    - by David Neale
    I have a ubr file set up to stress-test an internal DotNetNuke site with WCAT:

        transaction
        {
            id = "Intranet Home Page";
            weight = 1000;
            cookies{clear = true;}
            sleep{delay = rand("1","500");}
            request
            {
                url = "/";
                statuscode = 401;
            }
            request
            {
                url = "/";
                authentication = ntlm;
                username = "mydomain\\accountname";
                password = "password";
                statuscode = 200;
            }
            close{ method = reset;}
        }

    When running this (wcat.wsf -run -clients localhost -s myserver -t test.ubr -f settings.ubr -x) I simply get lots of error 500s:

        2010-03-08 10:29:31 192.168.11.239 GET / - 80 - 192.168.52.139 - 401 2 2148074254
        2010-03-08 10:29:31 192.168.11.239 GET / - 80 - 192.168.52.139 - 401 1 0
        2010-03-08 10:29:31 192.168.11.239 GET /Default.aspx - 80 mydomain\myaccount 192.168.52.139 - 500 0 0

    DNN is reporting these errors as:

        AssemblyVersion: 5.2.3
        PortalID: 0
        PortalName: My Company
        UserID: -1
        UserName:
        ActiveTabID: 39
        ActiveTabName: Home
        RawURL: /Default.aspx
        AbsoluteURL: /Default.aspx
        AbsoluteURLReferrer:
        UserAgent:
        DefaultDataProvider: DotNetNuke.Data.SqlDataProvider, DotNetNuke.SqlDataProvider
        ExceptionGUID: 28d8821f-1ef2-41db-8a65-d33e97a69130
        InnerException: Unhandled Error:
        FileName:
        FileLineNumber: 0
        FileColumnNumber: 0
        Method: DotNetNuke.Authentication.ActiveDirectory.HttpModules.AuthenticationModule.OnAuthenticateRequest
        StackTrace:
        Message: System.Exception: Unhandled Error: --- System.NullReferenceException: Object reference not set to an instance of an object.
            at DotNetNuke.Authentication.ActiveDirectory.HttpModules.AuthenticationModule.OnAuthenticateRequest(Object s, EventArgs e)
            at System.Web.HttpApplication.SyncEventExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute()
            at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
            --- End of inner exception stack trace ---
        Source:
        Server Name: MYSERVER

    It seems to be losing the username somehow.

    Read the article

  • IE History Tracking, IFRAMES, and Cross Domain error...

    - by peiklk
    So here's the deal. We have a Flash application that is running within an HTML file. For one page we call a legacy reporting system in ASP.NET that lives within an IFRAME. This page then communicates back to the Flash application using cross-domain scripting (document.domain = "domain" is set in both pages). THIS ALL WORKS. Now the kicker. Flash has history tracking enabled. This loads the history.js file, which creates a div tag to store page changes so the back and forward buttons work in the browser. This works for Firefox and Chrome, as they create a div tag. HOWEVER, in Internet Explorer, history.js creates another IFRAME (instead of a DIV) called ie_historyFrame. When the ScriptResource.axd code attempts to access this with: var frameDoc = this._historyFrame.contentWindow.document; we get an "Access is Denied" error message. ARGH! We've tried getting a handle to this IFRAME and inserting the document.domain code. FAIL. We've tried editing the historytemplate.html file that Flex also uses to include document.domain... FAIL. I've tried editing the underlying ASP.NET page to disable history tracking in the ScriptManager control. FAIL. At my wit's end on this one. We have users who need to use IE to access this site. They are big clients who we cannot tell to just use Firefox. Any suggestions would be greatly appreciated.

    Read the article

  • Capture keystrokes (e.g., function keys) while a messagebox is up

    - by FastAl
    We have a large WinForms app, and there is a built-in bug reporting system that can be activated during testing via the F5 key. I am capturing the F5 key with .NET's PreFilterMessage system. This works fine on the main forms, modal dialog boxes, etc. Unfortunately, the program also displays Windows message boxes when it needs to. When there is a bug with one of those, e.g. wrong text in the message box or it shouldn't be there at all, the message filter isn't executed at all while the message box is up! I realize I could fix it by either rewriting my own message box routine, or kicking off a separate thread that polls GetAsyncKeyState and calls the error reporter from there. However, I was hoping for a method that is less of a hack. Here's code that manifests the problem:

        Public Class Form1
            Implements IMessageFilter

            Private Sub Form1_Click(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Click
                MsgBox("now, a messagebox is up!")
            End Sub

            Private Sub Form1_Load(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles MyBase.Load
                Application.AddMessageFilter(Me)
            End Sub

            Public Function PreFilterMessage(ByRef m As System.Windows.Forms.Message) _
                As Boolean Implements IMessageFilter.PreFilterMessage

                Const VK_F5 As Int32 = &H74
                Const WM_KEYDOWN As Integer = &H100

                If m.Msg = WM_KEYDOWN And m.WParam.ToInt32 = VK_F5 Then
                    ' In reality code here takes a screenshot, saves the program state,
                    ' and shows a bug report interface
                    IO.File.AppendAllText("c:\bugs.txt", InputBox("Describe the bug:"))
                End If
            End Function
        End Class

    Many thanks.

    Read the article

  • Installing SQLServer 2005 on Windows 7 64bit

    - by Mostafa
    Hi, for three days I've been trying to install SQL Server 2005 under Windows 7 64-bit on my machine. First let me tell you what I've done and what I've got so far.

    1. I installed Windows 7 64-bit on my computer.
    2. I tried to install SQL Server 2005 "Developer Edition".
       2.1 In the "System Configuration Check" page I received 2 warnings, one for "IIS Feature Requirement" and another for "ASP.NET Version Registration Required".
           2.1.1 I installed "Internet Information Services" from the "Turn Windows features on or off" section in Control Panel.
           2.1.2 I enabled the 32-bit reporting service via Inetpub > AdminScripts > adsutil.vbs.
       2.2 At this stage there were no warnings in the System Configuration Check.
    3. So I installed SQL Server 2005 Developer Edition with all default settings.
    4. I installed SQL Server 2005 Service Pack 3 64-bit.

    Now when I run Management Studio there is no name in the "Server name" section. I typed my computer name, or ".", and I got this error:

        A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) (Microsoft SQL Server, Error: 2)

    I googled this error and some people said to follow this instruction: Start > SQL Server 2005 > Configuration Tools > SQL Server Surface Area Configuration > Surface Area Configuration for Services and Connections. But I got this error:

        No SQL Server 2005 components were found on the specified computer. Either no components are installed, or you are not an administrator on this computer. (SQLSAC)

    I'm really tired because of this, and I don't know what's wrong. Some more information:

        - I have no additional software on my computer, like antivirus or a proxy.
        - I tried all the steps with the "Standard Edition" too, but no difference.
        - My user is an Administrator.
        - I have tried these steps more than 5 times, including re-installing Windows 7.

    Please help me, I'm losing all my hair.

    Read the article
