Search Results

Search found 7097 results on 284 pages for 'calls'.

  • Tips on managing dependencies for a release?

    - by Andrew Murray
    Our system comprises many .NET websites, class libraries, and an MSSQL database. We use SVN for source control and TeamCity to automatically build to a Test server. Our team is normally working on 4 or 5 projects at a time. We try to lump many changes into a largish rollout every 2-4 weeks. My problem is with keeping track of all the dependencies for a rollout. Example: Website A cannot go live until we've rolled out Branch X of Class library B, built in turn against the Trunk of Class library C, which needs Config Updates Y and Z and Database Update D, which needs Migration Script E... It gets even more complex - like making sure each developer's project is actually compatible with the others and is building against the same versions. Yes, this is a management issue as much as a technical issue. Currently our non-optimal solution is: a whiteboard listing features that haven't gone live yet; relying on our memory and intuition when planning the rollout, until we're pretty sure we've thought of everything; a dry run on our Staging environment (it's a good indication, but we're often not sure whether Staging is 100% in sync with Live - part of the problem I'm hoping to solve); and some amount of winging it on rollout day. So far so good, minus a few close calls. But as our system grows, I'd like a more scientific release management system allowing for more flexibility, like being able to roll out a single change or bugfix on its own, safe in the knowledge that it won't break anything else. I'm guessing the best solution involves some sort of version numbering system, and perhaps a project management tool. We're a start-up, so we're not too hot on religiously sticking to rigid processes, but we're happy to start, provided it doesn't add more overhead than it's worth. I'd love to hear advice from other teams who have solved this problem.

    Read the article

  • How can I open VLC via browser with PHP (Mac OS X)

    - by Damiqib
    I'm trying to open VLC via a browser and make it instantly play the given video file on Mac OS X. This runs on my local server and is only meant to run locally - therefore I already run Apache (MAMP) with my username and with group "staff" (defined in httpd.conf). YES - I do know that VLC has an HTTP interface - however that is not what I need, so do not suggest that... My current system works without any problems when I run it via Terminal: php /var/www/Movies/index.php - this leads to VLC opening and the video starts playing fullscreen as intended. Problems start when I run the same PHP page in a browser. Then the VLC process starts, but there's no GUI for it, the video file won't start playing, and the VLC process takes nearly 100% of CPU. Both the Terminal-started and browser-started VLC processes run as the same user (mine); both have "Parent process" bash; the VLC process begun from Terminal has an empty "Process group" (only a process id number) while the browser-started one has "httpd" + (id number); and the VLC process started via the browser makes 1000 times more "Mach System Calls" than its Terminal-started counterpart. Could anyone give me any pointers on how to get this thing working? index.php # $j is a file path to the videofile and is defined before exec('/var/www/Movies/vlc.sh "' . $j . '" > /dev/null 2>&1 & echo $!;'); # If I do this in the given PHP-page it tells me that apache is running # with my username and with the group "staff" like it should be... exec('whoamI'); vlc.sh #!/bin/bash # Activate VLC in 5 seconds to make it the front-most window (sleep 5; open -a VLC) & # Open video file /Applications/VLC.app/Contents/MacOS/VLC --quiet --fullscreen "$1"

    Read the article

  • Extending EF4 SQL Generation

    - by Basiclife
    Hi, We're using EF4 in a fairly large system and occasionally run into problems due to EF4 being unable to convert certain expressions into SQL. At present, we either need to do some fancy footwork (DB/Code) or just accept the performance hit and allow the query to be executed in-memory. Needless to say neither of these is ideal and the hacks we've sometimes had to use reduce readability / maintainability. What we would ideally like is a way to extend the SQL generation capabilities of the EF4 SQL provider. Obviously there are some things like .Net method calls which will always have to be client-side but some functionality like date comparisons (eg Group by weeks in Linq to Entities ) should be do-able. I've Googled but perhaps I'm using the wrong terminology as all I get is information about the new features of EF4 SQL Generation. For such a flexible and extensible framework, I'd be surprised if this isn't possible. In my head, I'm imagining inheriting from the [SQL 2008] provider and extending it to handle additional expressions / similar in the expression tree it's given to convert to SQL. Any help/pointers appreciated. We're using VS2010 Ultimate, .Net 4 (non-client profile) and EF4. The app is in ASP.Net and is running in a 64-Bit environment in case it makes a difference.
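
    One common workaround for the date-comparison case mentioned above, sketched here with hedging rather than as an extension of the provider itself: EF4's canonical/store functions (EntityFunctions, SqlFunctions) keep expressions like "group by week" translatable to SQL. The Order type and property names below are illustrative, not from the question, and the query only works against a SQL Server-backed ObjectContext.

        using System;
        using System.Linq;
        using System.Data.Objects.SqlClient; // SqlFunctions (EF4, SQL Server provider only)

        // Illustrative entity standing in for whatever the real model contains.
        public class Order
        {
            public int Id { get; set; }
            public DateTime Placed { get; set; }
        }

        public static class ServerSideDates
        {
            // Groups orders by calendar week without pulling rows into memory.
            // SqlFunctions.DatePart translates to DATEPART(week, ...); evaluated
            // in memory it throws NotSupportedException, which is by design.
            public static IQueryable<IGrouping<int?, Order>> ByWeek(IQueryable<Order> orders)
            {
                return orders.GroupBy(o => SqlFunctions.DatePart("week", o.Placed));
            }
        }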

    Read the article

  • Can I spread out a long-running stored proc across multiple CPUs?

    - by Russ
    [Also on SuperUser - http://superuser.com/questions/116600/can-i-spead-out-a-long-running-stored-proc-accross-multiple-cpus] I have a stored procedure in SQL Server that gets and decrypts a block of data (credit cards in this case). Most of the time the performance is tolerable, but there are a couple of customers where the process is painfully slow, taking literally 1 minute to complete. (Well, 59377ms to return from SQL Server to be exact, but it can vary by a few hundred ms based on load.) When I watch the process, I see that SQL is only using a single processor to perform the whole process, and typically only processor 0. Is there a way I can change my stored proc so that SQL can multi-thread the process? Is it even feasible to cheat and break the calls in half (top 50%, bottom 50%) and spread the load, as a gross hack? (Just spit-balling here.) My stored proc: USE [Commerce] GO /****** Object: StoredProcedure [dbo].[GetAllCreditCardsByCustomerId] Script Date: 03/05/2010 11:50:14 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO ALTER PROCEDURE [dbo].[GetAllCreditCardsByCustomerId] @companyId UNIQUEIDENTIFIER, @DecryptionKey NVARCHAR (MAX) AS SET NoCount ON DECLARE @cardId uniqueidentifier DECLARE @tmpdecryptedCardData VarChar(MAX); DECLARE @decryptedCardData VarChar(MAX); DECLARE @tmpTable as Table ( CardId uniqueidentifier, DecryptedCard NVarChar(Max) ) DECLARE creditCards CURSOR FAST_FORWARD READ_ONLY FOR Select cardId from CreditCards where companyId = @companyId and Active=1 order by addedBy desc --2 OPEN creditCards --3 FETCH creditCards INTO @cardId -- prime the cursor WHILE @@Fetch_Status = 0 BEGIN --OPEN creditCards DECLARE creditCardData CURSOR FAST_FORWARD READ_ONLY FOR select convert(nvarchar(max), DecryptByCert(Cert_Id('Oh-Nay-Nay'), EncryptedCard, @DecryptionKey)) FROM CreditCardData where cardid = @cardId order by valueOrder OPEN creditCardData FETCH creditCardData INTO @tmpdecryptedCardData -- prime the cursor WHILE @@Fetch_Status = 0 BEGIN print 'CreditCardData' print @tmpdecryptedCardData set @decryptedCardData = ISNULL(@decryptedCardData, '') + @tmpdecryptedCardData print '@decryptedCardData' print @decryptedCardData; FETCH NEXT FROM creditCardData INTO @tmpdecryptedCardData -- fetch next END CLOSE creditCardData DEALLOCATE creditCardData insert into @tmpTable (CardId, DecryptedCard) values ( @cardId, @decryptedCardData ) set @decryptedCardData = '' FETCH NEXT FROM creditCards INTO @cardId -- fetch next END select CardId, DecryptedCard FROM @tmpTable CLOSE creditCards DEALLOCATE creditCards
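
    A single T-SQL call will not parallelize a cursor loop like this, but the "break the calls in half" idea can be sketched from the client side: partition the card ids and run each partition on its own connection, so SQL Server can schedule the sessions on separate CPUs. This is only a hedged sketch; GetCreditCardsByCardIdList is a hypothetical variant of the proc that accepts an id list and still does the DecryptByCert work server-side.

        using System;
        using System.Collections.Concurrent;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Linq;
        using System.Threading.Tasks;

        public static class ParallelCardDecrypt
        {
            // Splits the card ids into N buckets and decrypts each bucket on its own
            // connection. Results are collected into a thread-safe dictionary.
            public static IDictionary<Guid, string> DecryptAll(
                string connectionString, IList<Guid> cardIds, string decryptionKey, int buckets = 4)
            {
                var results = new ConcurrentDictionary<Guid, string>();
                var partitions = cardIds
                    .Select((id, i) => new { id, bucket = i % buckets })
                    .GroupBy(x => x.bucket, x => x.id);

                Parallel.ForEach(partitions, partition =>
                {
                    using (var conn = new SqlConnection(connectionString))
                    using (var cmd = new SqlCommand("dbo.GetCreditCardsByCardIdList", conn))
                    {
                        cmd.CommandType = CommandType.StoredProcedure;
                        // Hypothetical proc: takes a comma-separated id list plus the key
                        // and returns (CardId, DecryptedCard) rows for that subset.
                        cmd.Parameters.AddWithValue("@cardIds",
                            string.Join(",", partition.Select(g => g.ToString()).ToArray()));
                        cmd.Parameters.AddWithValue("@DecryptionKey", decryptionKey);
                        conn.Open();
                        using (var reader = cmd.ExecuteReader())
                            while (reader.Read())
                                results[reader.GetGuid(0)] = reader.GetString(1);
                    }
                });
                return results;
            }
        }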

    Read the article

  • DRY Authenticated Tasks in Cocoa (with distributed objects)

    - by arbales
    I'm kind of surprised/infuriated that the only way for me to run an authenticated task, like perhaps sudo gem install shi*t, is to make a tool with pre-written code. I'm writing a MacRuby application, which doesn't seem to expose the KAuthorization* constants/methods. So.. I learned Cocoa and Objective-C. My application creates an object, serves it, and calls a tool that elevates itself and then performs a selector on a distributed object (in the tool's thread). I hoped that the distributed object's methods would be evaluated inside the tool, so I could use delegation to create "privileged" tasks. If this won't work, don't try to save it, I just want a DRY/Cocoa solution. AuthHelper.m //AuthorizationExecuteWithPrivileges of this. AuthResponder* my_responder = [AuthResponder sharedResponder]; // Gets the proxy object (and it's delegate) NSString *selector = [NSString stringWithUTF8String:argv[3]]; NSLog(@"Performing selector: %@", selector); setuid(0); if ([[my_responder delegate] respondsToSelector:NSSelectorFromString(selector)]){ [[my_responder delegate] performSelectorOnMainThread:NSSelectorFromString(selector) withObject:nil waitUntilDone:YES]; } RandomController.m - (void)awakeFromNib { helperToolPath = [[[NSBundle mainBundle] resourcePath] stringByAppendingString:@"/AuthHelper"]; delegatePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingString:@"/ABExtensions.rb"]; AuthResponder* my_responder = [AuthResponder initAsService]; [my_responder setDelegate:self]; } -(oneway void)install_gems{ NSArray *args = [NSArray arrayWithObjects: @"gem", @"install", @"sinatra", nil]; [NSTask launchedTaskWithLaunchPath:@"/usr/bin/sudo" arguments:args]; NSLog(@"Ran AuthResponder.delegate.install_gems"); // This prints. } ... other privileged tasks, "sudo gem update --system" for one. I'm guessing the proxy object is performing the selector in its own thread, but I want the current (privileged) thread to do it so I can use sudo. Can I force the distributed object to evaluate the selector on the tool's thread? How else can I accomplish this dryly/cocoaly?

    Read the article

  • C# .NET : Is using the .NET Image Conversion enough?

    - by contactmatt
    I've seen a lot of people try to code their own image conversion techniques. It often seems to be very complicated, and ends up using GDI+ function calls and manipulating bits of the image. This has got me wondering if I am missing something in the simplicity of .NET's image conversion call when saving an image. Here's the code I have: Bitmap tempBmp = new Bitmap("c:\temp\img.jpg"); Bitmap bmp = new Bitmap(tempBmp, 800, 600); bmp.Save(c:\temp\img.bmp, //extension depends on format ImageFormat.Bmp) //These are all the ImageFormats I allow conversion to within the program. Ignore the syntax for a second ;) ImageFormat.Gif) //or ImageFormat.Jpeg) //or ImageFormat.Png) //or ImageFormat.Tiff) //or ImageFormat.Wmf) //or ImageFormat.Bmp)//or ); This is all I'm doing in my image conversion: just setting the location where the image should be saved, and passing it an ImageFormat type. I've tested it as best I can, but I'm wondering if I am missing anything in this simple format conversion, or if this is sufficient?
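
    For straight format conversion the Save overload above is generally all there is to it. The two things it quietly decides for you are JPEG quality and resampling, so a hedged sketch of the slightly more explicit version follows; the paths and quality value are examples only, and the other formats (BMP, PNG, GIF, TIFF) usually need nothing more than the plain Save call.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Linq;

        public static class ImageConvert
        {
            // Resizes and saves as JPEG with an explicit quality instead of the
            // silent default, and disposes both bitmaps deterministically.
            public static void ConvertToJpeg(string sourcePath, string destPath,
                                             int width, int height, long quality)
            {
                using (var source = new Bitmap(sourcePath))
                using (var resized = new Bitmap(source, width, height))
                {
                    ImageCodecInfo jpegCodec = ImageCodecInfo.GetImageEncoders()
                        .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
                    using (var parameters = new EncoderParameters(1))
                    {
                        parameters.Param[0] = new EncoderParameter(Encoder.Quality, quality);
                        resized.Save(destPath, jpegCodec, parameters);
                    }
                }
            }
        }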

    Read the article

  • WMI Query Script as a Job

    - by Kenneth
    I have two scripts. One calls the other with a list of servers as parameters. The second query is designed to execute a WMI query. When I run it manually, it does this perfectly. When I try to run it as a job it hangs forever and I have to remove it. For the sake of space here is the relevant part of the calling script: ProcessServers.ps1 Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList $sqlsrv,$destdb,$server,$instance GetServerDetailsLight.ps1 param($sqlsrv,$destdb,$server,$instance) $password = get-content C:\SQLPS\auth.txt | convertto-securestring $credentials = new-object -typename System.Management.Automation.PSCredential -argumentlist "DOMAIN\MYUSER",$password [System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.SMO') $box_id = 0; if ($sqlsrv.length -eq 0) { write-output "No data passed" break } function getinfo { param( [string]$svr, [string]$inst ) "Entered GetInfo with: $svr,$inst" $cs = get-wmiobject win32_operatingsystem -computername $svr -credential $credentials -authentication 6 -Verbose -Debug | select Name, Model, Manufacturer, Description, DNSHostName, Domain, DomainRole, PartOfDomain, NumberOfProcessors, SystemType, TotalPhysicalMemory, UserName, Workgroup write-output "WMI Results: $cs" } getinfo $server $instance write-output "Complete" Executed as a job it will show as 'running' forever: PS C:\sqlps> Start-Job -FilePath .\GetServerDetailsLight.ps1 -ArgumentList DBSERVER,LOGDB,SERVER01,SERVER01 Id Name State HasMoreData Location Command -- ---- ----- ----------- -------- ------- 21 Job21 Running True localhost param($sqlsrv,$destdb,... GAC Version Location --- ------- -------- True v2.0.50727 C:\WINDOWS\assembly\GAC_MSIL\Microsoft.SqlServer.Smo\10.0.0.0__89845dcd8080cc91\Microsoft.SqlServer.Smo.dll getinfo MSDCHR01 MSDCHR01 Entered GetInfo with: SERVER01,SERVER01 The last output I ever get is the 'Entered GetInfo with: SERVER01,SERVER01'. If I run it manually like so: PS C:\sqlps> .\GetServerDetailsLight.ps1 DBSERVER LOGDB SERVER01 SERVER01 The WMI query executes just as expected. I am trying to determine why this is, or at least a useful way to trap errors from within jobs. Thanks!

    Read the article

  • T4MVC not generating an action

    - by Maslow
    I suspected there was some hidden magic somewhere that stopped what looks like actual method calls all over the place in T4MVC. Then I had a view fail to compile, and the stackTrace went into my actual method. [Authorize] public string Apply(string shortName) { if (shortName.IsNullOrEmpty()) return "Failed alliance name was not transmitted"; if (Request.IsAuthenticated == false || User == null || User.Identity == null) return "Apply authentication failed"; Models.Persistence.AlliancePersistance.Apply(User.Identity.Name, shortName); return "Applied"; } So this method isn't generating in the template after all. <%=Ajax.ActionLink("Apply", "Apply", new RouteValueDictionary() { { "shortName", item.Shortname } }, new AjaxOptions() { UpdateTargetId = "masterstatus" })%> <%=Html.ActionLink("Apply",MVC.Alliance.Apply(item.Shortname),new AjaxOptions() { UpdateTargetId = "masterstatus" }) %> The second method threw an exception on compile because the method Apply in my controller has an [Authorize] attribute so that if someone that isn't logged on clicks this, they get redirected to login, then right back to this page. There they can click on apply again, this time being logged in. And yes I realize one is Ajax.ActionLink while the other is Html.ActionLink I did try them both with the T4MVC version.

    Read the article

  • How to determine the data type of a CvMat

    - by Chris
    When using the CvMat type, the type of data is crucial to keeping your program running. For example, depending on whether your data is type float or unsigned char, you would choose one of these two commands: cvmGet(mat, row, col); cvGetReal2D(mat, row, col); Is there a universal approach to this? If the wrong data type matrix is passed to these calls, they crash at runtime. This is becoming an issue, since a function I have defined is getting passed several different types of matrices. How do you determine the data type of a matrix so you can always access its data? I tried using the "type()" function as such. CvMat* tmp_ptr = cvCreateMat(t_height,t_width,CV_8U); std::cout << "type = " << tmp_ptr->type() << std::endl; This does not compile, saying "term does not evaluate to a function taking 0 arguments". If I remove the brackets after the word type, I get a type of 1111638032 EDIT minimal application that reproduces this... int main( int argc, char** argv ) { CvMat *tmp2 = cvCreateMat(10,10, CV_32FC1); std::cout << "tmp2 type = " << tmp2->type << " and CV_32FC1 = " << CV_32FC1 << " and " << (tmp2->type == CV_32FC1) << std::endl; } Output: tmp2 type = 1111638021 and CV_32FC1 = 5 and 0

    Read the article

  • SQL ORACLE - Datatable in where clause

    - by Gage
    Currently I have an SQL call returning a dataset from an MSSQL database, and I want to take a column from that data and return IDs based off that column from the Oracle database. I can do this one at a time but that requires multiple calls; I am wondering if this can be done with one call. String sql=String.Format(@"Select DIST_NO FROM DISTRICT WHERE DIST_DESC = '{0}'", row.Table.Rows[0]["Op_Centre"].ToString()); Above is the string I am using to return one ID at a time. I know the {0} can be used to format your value into the string, and maybe there is a way to do that with a DataTable. Also, to use multiple values in the where clause it would be: String sql=String.Format(@"Select DIST_NO FROM DISTRICT WHERE DIST_DESC in ('{0}')", row.Table.Rows[0]["Op_Centre"].ToString()); Although I realize all of this can be done, I am wondering if there's an easy way to add it all to the SQL string in one call. As I am writing this I am realizing I could break the string into sections and then just add every row value to the SQL string within the "WHERE DIST_DESC IN (" clause... I am still curious to see if there is another way though, and because someone else may come across this problem I will post a solution if I develop one. Thanks in advance.
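
    Building the IN list by string concatenation works but invites quoting problems; a hedged sketch of a parameterised single round trip is below. It assumes the System.Data.OracleClient provider (ODP.NET is analogous), that DIST_NO is a NUMBER, and that the Op_Centre column exists as described above; the connection string and method name are examples only.

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.OracleClient; // deprecated but built in; ODP.NET has the same shape
        using System.Linq;

        public static class DistrictLookup
        {
            // Builds "... WHERE DIST_DESC IN (:p0, :p1, ...)" from the Op_Centre column
            // of the MSSQL result set, so all district numbers come back in one call.
            public static List<decimal> GetDistrictIds(DataTable mssqlResults, string oracleConnString)
            {
                string[] descriptions = mssqlResults.AsEnumerable()
                    .Select(r => r.Field<string>("Op_Centre"))
                    .Distinct()
                    .ToArray();

                string[] names = descriptions.Select((d, i) => "p" + i).ToArray();
                string sql = "SELECT DIST_NO FROM DISTRICT WHERE DIST_DESC IN (:"
                             + string.Join(", :", names) + ")";

                var ids = new List<decimal>();
                using (var conn = new OracleConnection(oracleConnString))
                using (var cmd = new OracleCommand(sql, conn))
                {
                    for (int i = 0; i < descriptions.Length; i++)
                        cmd.Parameters.Add(new OracleParameter(names[i], descriptions[i]));

                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                        while (reader.Read())
                            ids.Add(reader.GetDecimal(0)); // DIST_NO assumed to be NUMBER
                }
                return ids;
            }
        }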

    Read the article

  • List.Sort in C#: comparer being called with null object

    - by cbp
    I am getting strange behaviour using the built-in C# List.Sort function with a custom comparer. For some reason it sometimes calls the comparer class's Compare method with a null object as one of the parameters. But if I check the list with the debugger there are no null objects in the collection. My comparer class looks like this: public class DelegateToComparer<T> : IComparer<T> { private readonly Func<T,T,int> _comparer; public int Compare(T x, T y) { return _comparer(x, y); } public DelegateToComparer(Func<T, T, int> comparer) { _comparer = comparer; } } This allows a delegate to be passed to the List.Sort method, like this: mylist.Sort(new DelegateToComparer<MyClass>( (x, y) => { return x.SomeProp.CompareTo(y.SomeProp); }); So the above delegate will throw a null reference exception for the x parameter, even though no elements of mylist are null. UPDATE: Yes I am absolutely sure that it is parameter x throwing the null reference exception! UPDATE: Instead of using the framework's List.Sort method, I tried a custom sort method (i.e. new BubbleSort().Sort(mylist)) and the problem went away. As I suspected, the List.Sort method passes null to the comparer for some reason.
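
    A hedged diagnostic sketch: a variant of the comparer above that tolerates nulls instead of throwing inside Array.Sort. If this version stops the crash and sorts nulls to the front, the list really did contain a null (or was modified mid-sort); if the crash moves, the culprit is more likely SomeProp being null on one of the elements.

        using System;
        using System.Collections.Generic;

        public class NullSafeDelegateComparer<T> : IComparer<T> where T : class
        {
            private readonly Func<T, T, int> _comparer;

            public NullSafeDelegateComparer(Func<T, T, int> comparer)
            {
                if (comparer == null) throw new ArgumentNullException("comparer");
                _comparer = comparer;
            }

            public int Compare(T x, T y)
            {
                // Nulls sort first instead of blowing up, which makes the failure visible.
                if (ReferenceEquals(x, y)) return 0;
                if (x == null) return -1;
                if (y == null) return 1;
                return _comparer(x, y);
            }
        }

    Usage is unchanged from the question: mylist.Sort(new NullSafeDelegateComparer<MyClass>((x, y) => x.SomeProp.CompareTo(y.SomeProp)));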

    Read the article

  • Execute code on assembly load

    - by Dmitriy Matveev
    I'm working on a wrapper for some huge unmanaged library. Almost every one of its functions can call some error handler deep inside. The default error handler writes the error to the console and calls the abort() function. This behavior is undesirable for a managed library, so I want to replace the default error handler with my own, which will just throw some exception and let the program continue normal execution after handling of this exception. The error handler must be changed before any of the wrapped functions are called. The wrapper library is written in managed C++ with static linkage to the wrapped library, so nothing like "a type with hundreds of dll imports" is present. I also can't find a single type which is used by everything inside the wrapper library, so I can't solve the problem by defining a static constructor in one single type which will execute the code I need. I currently see two ways of solving the problem: Define some static method like Library.Initialize() which must be called once by the client before his code uses any part of the wrapper library. Find the minimal subset of types which is used by every top-level function (I think the size of this subset will be something like 25-50 types) and add static constructors calling Library.Initialize (which would be internal in that scenario) to every one of these types. I've read this and this question, but they didn't help me. Is there any proper way of solving the problem? Maybe some nice hacks are available?
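
    Of the two options listed, the first (an explicit Library.Initialize()) is the one that is easy to make robust; a hedged C# sketch follows. SetNativeErrorHandler and LibraryException are placeholders for whatever the managed C++ wrapper actually exposes, not real APIs.

        using System;
        using System.Threading;

        public class LibraryException : Exception
        {
            public LibraryException(int code) : base("Native library error " + code) { }
        }

        public static class Library
        {
            private static int _initialized;

            // One explicit entry point the client must call before touching any
            // wrapped function. Safe to call repeatedly and from any thread;
            // only the first call does the work.
            public static void Initialize()
            {
                if (Interlocked.Exchange(ref _initialized, 1) != 0)
                    return;

                SetNativeErrorHandler(code => { throw new LibraryException(code); });
            }

            private static void SetNativeErrorHandler(Action<int> handler)
            {
                // Placeholder: in the real wrapper this forwards the delegate to the
                // unmanaged error-handler registration function.
            }
        }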

    Read the article

  • Problem calling web service from within JBOSS EJB Service

    - by Rob Goodwin
    I have a simple web service sitting on our internal network. I used SOAPUI to do a bit of testing, generated the service classes from the WSDL , and wrote some java code to access the service. All went as expected as I was able to create the service proxy classes and make calls. Pretty simple stuff. The only speed bump was getting java to trust the certificate from the machine providing the web service. That was not a technical problem, but rather my lack of experience with SSL based web services. Now onto my problem. I coded up a simple EJB service and deployed it into JBoss Application Server 4.3 and now get the following error in the code that previously worked. 12:21:50,235 WARN [ServiceDelegateImpl] Cannot access wsdlURL: https://WS-Test/TestService/v2/TestService?wsdl I can access the wsdl file from a web browser running on the same machine as the application server using the URL in the error message. I am at a loss as to where to go from here. I turned on the debug logs in JBOSS and got nothing more than what I showed above. I have done some searching on the net and found the same error in some questions, but those questions had no answers. The web services classes where generated with JAX-WS 2.2 using the wsimport ant task and placed in a jar that is included in the ejb package. JBoss is deployed in RHEL 5.4, where the standalone testing was done on Windows XP.

    Read the article

  • Datatable add new column and values speed issue

    - by Cine
    I am having some speed issues with my DataTables. In this particular case I am using the DataTable as a holder of data; it is never used in a GUI or any other scenario that actually uses any of the fancy features. In my speed trace, this particular constructor was showing up as a heavy user of time when my database is ~40k rows. The main user was set_Item of DataTable. protected myclass(DataTable dataTable, DataColumn idColumn) { this.dataTable = dataTable; IdColumn = idColumn ?? this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); JobIdColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); IsNewColumn = this.dataTable.Columns.Add(string.Format("SYS_{0}_SYS", Guid.NewGuid()), Type.GetType("System.Int32")); int id = 1; foreach (DataRow r in this.dataTable.Rows) { r[JobIdColumn] = id++; r[IsNewColumn] = (r[IdColumn] == null || r[IdColumn].ToString() == string.Empty) ? 1 : 0; } } Digging deeper into the trace, it turns out that set_Item calls EndEdit, which brings my thoughts to the transaction support of the DataTable, for which I have no use in my scenario. So my solution was to open editing on all of the rows and never close them again. _dt.BeginLoadData(); foreach (DataRow row in _dt.Rows) row.BeginEdit(); Is there a better solution? This feels too much like a big giant hack that will eventually come back and bite me. You might suggest that I don't use a DataTable at all, but I have already considered that and rejected it due to the amount of effort that would be required to reimplement it with a custom class. The main reason it is a DataTable is that it is ancient code (.NET 1.1 era) and I don't want to spend that much time changing it, and also because the original table comes out of a third-party component.
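
    A hedged alternative to leaving every row permanently in edit mode: keep the same population loop but wrap each row in an explicit BeginEdit/EndEdit, so set_Item stops paying for an implicit edit per column, and bracket the whole thing in BeginLoadData/EndLoadData. Column handling follows the constructor above; the class and method names are illustrative.

        using System;
        using System.Data;

        public static class JobColumns
        {
            // Same work as the constructor in the question, but each row is put into
            // edit mode once, so there is one commit per row instead of one per column.
            public static void Populate(DataTable table, DataColumn idColumn,
                                        DataColumn jobIdColumn, DataColumn isNewColumn)
            {
                int id = 1;
                table.BeginLoadData(); // also suspends index maintenance and constraints
                try
                {
                    foreach (DataRow row in table.Rows)
                    {
                        row.BeginEdit();
                        row[jobIdColumn] = id++;
                        object current = row[idColumn];
                        row[isNewColumn] =
                            (current == null || current == DBNull.Value || current.ToString().Length == 0) ? 1 : 0;
                        row.EndEdit();
                    }
                }
                finally
                {
                    table.EndLoadData();
                }
            }
        }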

    Read the article

  • find(:all) and then add data from another table to the object

    - by Koning Baard XIV
    I have two tables: create_table "friendships", :force => true do |t| t.integer "user1_id" t.integer "user2_id" t.boolean "hasaccepted" t.datetime "created_at" t.datetime "updated_at" end and create_table "users", :force => true do |t| t.string "email" t.string "password" t.string "phone" t.boolean "gender" t.datetime "created_at" t.datetime "updated_at" t.string "firstname" t.string "lastname" t.date "birthday" end I need to show the user a list of Friendrequests, so I use this method in my controller: def getfriendrequests respond_to do |format| case params[:id] when "to_me" @friendrequests = Friendship.find(:all, :conditions => { :user2_id => session[:user], :hasaccepted => false }) when "from_me" @friendrequests = Friendship.find(:all, :conditions => { :user1_id => session[:user], :hasaccepted => false }) end format.xml { render :xml => @friendrequests } format.json { render :json => @friendrequests } end end I do nearly everything using AJAX, so to fetch the First and Last name of the user with UID user2_id (the to_me param comes later, don't worry right now), I need a for loop which make multiple AJAX calls. This sucks and costs much bandwidth. So I'd rather like that getfriendrequests also returns the First and Last name of the corresponding users, so, e.g. the JSON response would not be: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3 } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4 } } ] but rather: [ { "friendship": { "created_at": "2010-02-19T13:51:31Z", "user1_id": 2, "updated_at": "2010-02-19T13:51:31Z", "hasaccepted": false, "id": 11, "user2_id": 3, "firstname": "Jon", "lastname": "Skeet" } }, { "friendship": { "created_at": "2010-02-19T16:31:23Z", "user1_id": 2, "updated_at": "2010-02-19T16:31:23Z", "hasaccepted": false, "id": 12, "user2_id": 4, "firstname": "Mark", "lastname": "Gravell" } } ] I thought of a for loop in the getfriendrequests method, but I don't know how to implement this, and maybe there is an easier way. It must also work for XML. Can anyone help me? Thanks

    Read the article

  • Google Maps API v3 click event raised when clicking MarkerClusterer?

    - by lucian.jp
    I have a Google Maps API v3 map object on a page that uses MarkerClusterer. I have a function that needs to run when we click on the map, so it is registered as: google.maps.event.addListener(map, 'click', function (event) { CallMe(event.latLng); }); So my problem is as follows: when I click on a MarkerClusterer cluster, instead of behaving like a marker (which does not raise the map's click event, only its own), it also fires the click event on the map. To test this I have generated an alert from the MarkerClusterer click: google.maps.event.addListener(markerClusterer, "clusterclick", function (cluster) { alert('MarkerClusterer click event'); }); The clusterclick fires after the map object's click event, so removing the map object's listener is not a solution. Is there any way to test whether there was a cluster click inside the click event of the map? Or a way to replicate the marker behaviour and not raise the map's click event when clusterclick is fired? Google and the documentation didn't help me. Thanks

    Read the article

  • 32-bit DllImport generating incorrect format error (0x8007000b) on Win7 x64 platform

    - by DFP
    Hello, I'm trying to install and run a 32-bit application on a Win7 x64 machine. The application is built as a Win32 app. It runs fine on 32-bit platforms. On the x64 machine it installs correctly into the Program Files (x86) directory and runs fine until I make a call into a 32-bit DLL. At that time I get the incorrect format error (0x8007000b), indicating it is trying to load a DLL of the wrong bitness. Indeed it is trying to load the 64-bit DLL from the System32 directory rather than the 32-bit version in the SysWOW64 directory. Another 32-bit application provided by the DLL vendor runs correctly, and it does load the 32-bit DLL from the SysWOW64 directory. I do not have the source for their application to see how they are accessing the DLL. I'm using DllImport as shown below to access the DLL. Is there a way to decorate the DllImport calls to force it to load the 32-bit version? Any thoughts appreciated. Thanks, DP public static class Micronas { [DllImport(@"UAC2.DLL")] public static extern short UacBuildDeviceList(uint uFlags); [DllImport(@"UAC2.DLL")] public static extern short UacGetNumberOfDevices(); [DllImport(@"UAC2.DLL")] public static extern uint UacGetFirstDevice(); [DllImport(@"UAC2.DLL")] public static extern uint UacGetNextDevice(uint handle); [DllImport(@"UAC2.DLL")] public static extern uint UacSetXDFP(uint handle, short adr, uint data); [DllImport(@"UAC2.DLL")] public unsafe static extern uint UacGetXDFP(uint handle, short adr, IntPtr data); }
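
    There is no DllImport decoration for bitness: the DLL's bitness has to match the process, and WOW64 file-system redirection then picks the right copy automatically. The usual cause of this symptom is an AnyCPU executable running as a 64-bit process on x64, so the common fix is setting Platform target to x86 on the EXE project. A hedged fail-fast check, in case that build setting ever regresses:

        using System;

        public static class BitnessGuard
        {
            // Call once at startup. IntPtr.Size is 4 in a 32-bit process and 8 in a
            // 64-bit one, so this catches an AnyCPU build running as 64-bit before
            // the first UAC2.DLL call fails with 0x8007000b.
            public static void EnsureThisProcessIs32Bit()
            {
                if (IntPtr.Size != 4)
                    throw new InvalidOperationException(
                        "This process is running 64-bit; rebuild the executable with " +
                        "Platform target x86 so the 32-bit UAC2.DLL can load.");
            }
        }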

    Read the article

  • Objective-C: linking error with an extern variable

    - by LCYSoft
    I have some very simple Java code like this. I don't have any idea how to do this in Objective-C, especially the static part, which calls the getLocalIpAddress() method and assigns the result to the static string variable. I know how to set a static variable and a static method in Objective-C, but I don't know how to implement that static { } part from Java. Thanks in advance... public class Address { public static String localIpAddress; static { localIpAddress = getLocalIpAddress(); } public Address() { } static String getLocalIpAddress() { //do something to get local ip address } } I added this in my .h file #import <Foundation/Foundation.h> extern NSString *localIpAddress; @class WifiAddrss; @interface Address : NSObject { } @end And my .m file looks like #import "Address.h" #import "WifiAddress.h" @implementation Address +(void)initialize{ if(self == [Address class]){ localIpAddress = [self getLocalIpAddress]; } } +(NSString *)getLocalIpAddress{ return address here } -(id)init{ self = [super init]; if (self == nil){ NSLog(@"init error"); } return self; } @end And now I am getting a linking error that complains about the "extern NSString *localIpAddress" part. If I change the extern to static, it works fine. But what I want is to make the scope of the "localIpAddress" variable global, since if I put "static" in front of a variable in Objective-C then the variable is only visible in the class. This time, I want to make it a global variable. So my question is how to make the "localIpAddress" variable a global variable which is initialized once, the first time the Address class is created. Thanks in advance...

    Read the article

  • How to configure outgoing connections from an SQL stored procedure?

    - by Peter Vestberg
    I am working on a .NET project which uses Microsoft SQL server. In this project, I need a CLR stored procedure (written in C#) that uses a remote web service. So, when the stored procedure is executed on the SQL server, it makes web service calls and thus sends packets to a remote location. The problem is that when executing the SP I get: "System.Net.WebException: The request failed with HTTP status 403: Forbidden." The database user has full permission, the deployed CLR assembly and SP are even marked "unsafe", I tried signing it etc., so any of that is not causing the problem. When I am executing the very same C# code, but from a simple console application instead of as a SP, it all works fine. So I started to suspect a network related problem and had a packet sniffer running when executing both the SP and the console app version. What I realized was that the packets sent out had different destination IP addresses: the console app sent the packets directly to the web service IP while the SP sent the packets to a proxy server we use in our company. Due to network policies the latter is not allowed and that explains the "403 Forbidden" exception. So my question boils down to this: How can I configure the SP/MS SQL server to NOT use that proxy? I want it to send the packets directly to the web service IP, just like the test console app. (again, the C# code is the same , so it's not a programming matter). I've disabled all proxy settings in Internet Explorer in case the SQL server inherits these settings or something. However, no luck. Any help would be greatly appreciated! Best regards, Peter
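
    Besides fixing machine-level proxy settings for the SQL Server service account, the proxy can usually be bypassed in the calling code itself. A hedged sketch for plain WebRequest-based calls follows; if the client is a generated ASMX proxy it has its own Proxy property, and with WCF the equivalent knob is UseDefaultWebProxy on the binding. The URL handling below is illustrative.

        using System;
        using System.Net;

        public static class ProxyBypass
        {
            // Process-wide: every WebRequest issued from this CLR assembly goes direct,
            // ignoring whatever proxy the hosting service account inherited.
            public static void DisableDefaultProxy()
            {
                WebRequest.DefaultWebProxy = null;
            }

            // Per request, for hand-rolled HttpWebRequest calls to the web service.
            public static HttpWebRequest CreateDirectRequest(string url)
            {
                var request = (HttpWebRequest)WebRequest.Create(url);
                request.Proxy = null; // bypass the proxy for this call only
                return request;
            }
        }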

    Read the article

  • VB .Net - Reflection: Reflected Method from a loaded Assembly executes before calling method. Why?

    - by pu.griffin
    When I am loading an Assembly dynamically, then calling a method from it, I appear to be getting the method from Assembly executing before the code in the method that is calling it. It does not appear to be executing in a Serial manner as I would expect. Can anyone shine some light on why this might be happening. Below is some code to illustrate what I am seeing, the code from the some.dll assembly calls a method named PerformLookup. For testing I put a similar MessageBox type output with "PerformLookup Time: " as the text. What I end up seeing is: First: "PerformLookup Time: 40:842" Second: "initIndex Time: 45:873" Imports System Imports System.Data Imports System.IO Imports Microsoft.VisualBasic.Strings Imports System.Reflection Public Class Class1 Public Function initIndex(indexTable as System.Collections.Hashtable) As System.Data.DataSet Dim writeCode As String MessageBox.Show("initIndex Time: " & Date.Now.Second.ToString() & ":" & Date.Now.Millisecond.ToString()) System.Threading.Thread.Sleep(5000) writeCode = RefreshList() End Function Public Function RefreshList() As String Dim asm As System.Reflection.Assembly Dim t As Type() Dim ty As Type Dim m As MethodInfo() Dim mm As MethodInfo Dim retString as String retString = "" Try asm = System.Reflection.Assembly.LoadFrom("C:\Program Files\some.dll") t = asm.GetTypes() ty = asm.GetType(t(28).FullName) 'known class location m = ty.GetMethods() mm = ty.GetMethod("PerformLookup") Dim o as Object o = Activator.CreateInstance(ty) Dim oo as Object() retString = mm.Invoke(o,Nothing).ToString() Catch Ex As Exception End Try return retString End Function End Class

    Read the article

  • CGContext problems

    - by Peyman
    Hi, I have a CALayer tree hierarchy A B C D, where A is the view's root layer (I create and add the B, C and D layers). I am using the same delegate method - (void) drawLayer:(CALayer *) theLayer inContext:(CGContextRef)context to provide content to each of these layers (implemented through a switch statement in that method). At initiation A's content is drawn (A.contents) sequentially through these methods: - (void) drawLayer:(CALayer *) theLayer inContext:(CGContextRef)context -(void) drawCircle:(CALayer *) theLayer inContext:(CGContextRef)context where drawCircle does CGContextSaveGState(context); / draws circle and other paths here / CGContextRestoreGState(context); CGImageRef contentImage = CGBitmapContextCreateImage(context); theLayer.contents = (id) contentImage; CGImageRelease(contentImage); (i.e. I save the context, do the drawing, restore the context, make a bitmap of the context, update the layer's contents, in this case A's, then release the contentImage). When the user then clicks somewhere in the circle, in the touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event delegate method I try to paint the content of B by (still in touchesEnded:) CGContextRef context = UIGraphicsGetCurrentContext(); [self drawLayer:self.B inContext:context]; [self.B setNeedsDisplay]; The setNeedsDisplay calls the delegate - drawLayer:(CALayer *) theLayer inContext:(CGContextRef)context again, but this time the switch statement (using a layer flag) calls [self drawCircle:theLayer inContext:context]; with color red. The problem I am facing is that when -drawCircle:inContext: is called I get a long list of CGContext errors, starting with <Error>: CGContextSaveGState: invalid context and ending with CGBitmapContextCreateImage: invalid context. I played around with making the context the view's ivar and it worked, so I am sure the context is the problem, but I don't know why. I've tried CGContextFlush but it didn't help. Any help would be appreciated, thank you.

    Read the article

  • Validation control unable to find its control to validate

    - by nat
    I have a repeater that is bound to a number of custom data items/types. On the ItemDataBound event for the repeater, the code calls a RenderEdit function that, depending on the custom data type, will render a custom control. It will also (if a validation flag is set) render a validation control for the appropriate rendered edit control. The edit control overrides the CreateChildControls() method for the custom control, adding a number of LiteralControls, thus: protected override void CreateChildControls() { //other bits removed - but it is this 'hidden' control i am trying to validate this.Controls.Add(new LiteralControl(string.Format( "<input type=\"text\" name=\"{0}\" id=\"{0}\" value=\"{1}\" style=\"display:none;\" \">" , this.UniqueID , this.MediaId.ToString()) )); //some other bits removed } The validation control is rendered like this, where the passed-in editControl is the control instance of which the above CreateChildControls is a method: public override Control RenderValidationControl(Control editControl) { Control ctrl = new PlaceHolder(); RequiredFieldValidator req = new RequiredFieldValidator(); req.ID = editControl.ClientID + "_validator"; req.ControlToValidate = editControl.UniqueID; req.Display = ValidatorDisplay.Dynamic; req.InitialValue = "0"; req.ErrorMessage = this.Caption + " cannot be blank"; ctrl.Controls.Add(req); return ctrl; } The problem is, although the validation control's .ControlToValidate property is set to the UniqueID of the edit control, when I hit the page I get the following error: Unable to find control id 'FieldRepeater$ctl01$ctl00' referenced by the 'ControlToValidate' property of 'FieldRepeater_ctl01_ctl00_validator'. I have tried changing the literal in CreateChildControls to a new TextBox() and then setting the ID etc., but I get a similar problem. Can anyone enlighten me? Is this because of the order the controls are rendered in, i.e. the validation control is written before the edit control? Or... Anyhow, any help much appreciated. Thanks, nat
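
    The validator most likely fails because the thing being validated is a LiteralControl's raw <input> markup rather than a server control the page can find, and because ControlToValidate expects an ID resolvable within the validator's naming container, not a UniqueID. A hedged sketch of one way to restructure it is below; the class name and IDs are illustrative, not from the original code, and the validator is created inside the same naming container as the value control so FindControl can resolve it.

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        // CompositeControl already implements INamingContainer, so the child IDs
        // below are scoped to this control.
        public class MediaEditControl : CompositeControl
        {
            public int MediaId { get; set; }
            public bool Validate { get; set; }
            public string Caption { get; set; }

            protected override void CreateChildControls()
            {
                var valueBox = new TextBox();
                valueBox.ID = "mediaId"; // the validator refers to this ID
                valueBox.Text = MediaId.ToString();
                valueBox.Style["display"] = "none";
                Controls.Add(valueBox);

                if (Validate)
                {
                    var required = new RequiredFieldValidator();
                    required.ID = "mediaIdValidator";
                    required.ControlToValidate = valueBox.ID; // resolved inside this naming container
                    required.Display = ValidatorDisplay.Dynamic;
                    required.InitialValue = "0";
                    required.ErrorMessage = Caption + " cannot be blank";
                    Controls.Add(required);
                }
            }
        }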

    Read the article

  • Is there any way to close a StreamWriter without closing its BaseStream?

    - by Binary Worrier
    My root problem is that when a using block calls Dispose on a StreamWriter, it also disposes the BaseStream (same problem with Close). I have a workaround for this, but as you can see it involves copying the stream. Is there any way to do this without copying the stream? The purpose of this is to get the contents of a string (originally read from a database) into a stream, so the stream can be read by a third-party component. NB: I cannot change the third-party component. public System.IO.Stream CreateStream(string value) { var baseStream = new System.IO.MemoryStream(); var baseCopy = new System.IO.MemoryStream(); using (var writer = new System.IO.StreamWriter(baseStream, System.Text.Encoding.UTF8)) { writer.Write(value); writer.Flush(); baseStream.WriteTo(baseCopy); } baseCopy.Seek(0, System.IO.SeekOrigin.Begin); return baseCopy; } Used as: public void Noddy() { System.IO.Stream myStream = CreateStream("The contents of this string are unimportant"); My3rdPartyComponent.ReadFromStream(myStream); } Ideally I'm looking for an imaginary method called BreakAssociationWithBaseStream, e.g. public System.IO.Stream CreateStream_Alternate(string value) { var baseStream = new System.IO.MemoryStream(); using (var writer = new System.IO.StreamWriter(baseStream, System.Text.Encoding.UTF8)) { writer.Write(value); writer.Flush(); writer.BreakAssociationWithBaseStream(); } return baseStream; }
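
    A hedged sketch of the no-copy version: since a MemoryStream holds no unmanaged resources and a flushed StreamWriter has nothing left to release, simply not disposing the writer gives the same effect as the imaginary BreakAssociationWithBaseStream. (Newer framework versions also add a StreamWriter constructor with a leaveOpen flag, which is the tidier fix where available.)

        using System.IO;
        using System.Text;

        public static class StreamFactory
        {
            // Writes the string into a MemoryStream and hands back the same stream,
            // rewound, without ever disposing the writer (and therefore without
            // closing the base stream).
            public static Stream CreateStream(string value)
            {
                var baseStream = new MemoryStream();
                var writer = new StreamWriter(baseStream, Encoding.UTF8);
                writer.Write(value);
                writer.Flush();                       // push everything into baseStream
                baseStream.Seek(0, SeekOrigin.Begin); // rewind for the third-party reader
                return baseStream;
            }
        }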

    Read the article
