Search Results

Search found 3947 results on 158 pages for 'passing'.

Page 125 of 158

  • Ruby on Rails: How to use a local variable in a collection_select

    - by mmacaulay
    I have a partial view which I'm passing a local variable into: <%= render :partial => "products/product_row", :locals => { :product => product } %> These are rows in a table, and I want to have a <select> in each row for product categories: <%= collection_select(:product, :category_id, @current_user.categories, :id, :name, options = {:prompt => "-- Select a category --"}, html_options = { :id => "", :class => "product_category" }) %> (Note: the id = "" is there because collection_select tries to give all these select elements the same id.) The problem is that I want to have product.category be selected by default and this doesn't work unless I have an instance variable @product. I can't do this in the controller because this is a collection of products. One way I was able to get around this was to have this line just before the collection_select: <% @product = product %> But this seems very hacky and would be a problem if I ever wanted to have an actual instance variable @product in the controller. I guess one workaround would be to name this instance variable something more specific like @product_select_tmp in hopes of not interfering with anything that might be declared in the controller. This still seems very hacky though, and I'd prefer a cleaner solution. Surely there must be a way to have collection_select use a local variable instead of an instance variable. Note that I've tried a few different ways of calling collection_select with no success: <%= collection_select(product, ... <%= collection_select('product', ... etc. Any help greatly appreciated!

    Read the article

  • Running commands over ssh with Java

    - by Ichorus
    Scenario: I'd like to run commands on remote machines from a Java program over ssh (I am using OpenSSH on my development machine). I'd also like to make the ssh connection by passing the password rather than setting up keys, as I would with 'expect'. Problem: When trying to do the 'expect'-like password login, the Process created with ProcessBuilder cannot seem to see the password prompt. When running regular non-ssh commands (e.g. 'ls') I can get the streams and interact with them just fine. I am combining standard error and standard out into one stream with redirectErrorStream(true), so I am not missing it in standard error... When I run ssh with the '-v' option, I see all of the logging in the stream but I do not see the prompt. This is my first time trying to use ProcessBuilder for something like this. I know it would be easier to use Python, Perl or good ol' expect, but my boss wants to utilize what we are trying to get back (remote log files and running scripts) within an existing Java program, so I am kind of stuck. Thanks in advance for the help!
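
    A likely cause (an assumption, but consistent with the '-v' output showing everything except the prompt): OpenSSH reads the password from the controlling terminal rather than from stdin/stdout, so the prompt never appears on the merged streams; expect works because it allocates a pseudo-terminal. One way around this in pure Java is an SSH library that does password authentication in-process. Below is a rough sketch using the JSch library (com.jcraft.jsch); the class and method names are from memory and should be checked against its documentation:

        import com.jcraft.jsch.ChannelExec;
        import com.jcraft.jsch.JSch;
        import com.jcraft.jsch.Session;
        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class RemoteCommand {
            public static void main(String[] args) throws Exception {
                JSch jsch = new JSch();
                Session session = jsch.getSession("username", "hostname", 22);
                session.setPassword("secret");                      // password auth, no key setup
                session.setConfig("StrictHostKeyChecking", "no");   // demo only; verify hosts in real code
                session.connect();

                ChannelExec channel = (ChannelExec) session.openChannel("exec");
                channel.setCommand("ls -l /var/log");
                channel.setErrStream(System.err);
                BufferedReader out = new BufferedReader(new InputStreamReader(channel.getInputStream()));
                channel.connect();

                String line;
                while ((line = out.readLine()) != null) {
                    System.out.println(line);                       // remote command output
                }
                channel.disconnect();
                session.disconnect();
            }
        }

    This keeps the whole interaction inside the JVM, so there is no prompt to scrape at all.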

    Read the article

  • function objects versus function pointers

    - by kumar_m_kiran
    Hi all, I have two questions related to function objects and function pointers. Question 1: When I read about the different uses of the STL sort algorithm, I see that the third parameter can be a function object; below is an example: class State { public: //... int population() const; float aveTempF() const; //... }; struct PopLess : public std::binary_function<State,State,bool> { bool operator ()( const State &a, const State &b ) const { return popLess( a, b ); } }; sort( union, union+50, PopLess() ); Now, how does the statement sort(union, union+50, PopLess()) work? PopLess() must be resolved into something like PopLess tempObject.operator(), which would be the same as executing the operator() function on a temporary object. I see this as passing the return value of the overloaded operator, i.e. a bool in my example, to the sort algorithm. So how does the sort function resolve the third parameter in this case? Question 2: Do we derive any particular advantage from using function objects versus function pointers? If we use the function pointer below, is there any disadvantage? inline bool popLess( const State &a, const State &b ) { return a.population() < b.population(); } std::sort( union, union+50, popLess ); // sort by population PS: Both of the above references (including the example) are from the book "C++ Common Knowledge: Essential Intermediate Programming" by Stephen C. Dewhurst. I was unable to decode the topic content, so I have posted here for help. Thanks in advance for your help.
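
    On question 1: PopLess() does not call anything at the call site; it only constructs a temporary function object, and sort receives that object and invokes its operator() internally for every comparison. A rough analogue in Java (an illustration only, not the book's example) is handing a Comparator instance to a sort - the comparator object is passed in, and the comparisons happen inside the sort:

        import java.util.Arrays;
        import java.util.Comparator;
        import java.util.List;

        class State {
            private final int population;
            State(int population) { this.population = population; }
            int population() { return population; }
        }

        // The "function object": an instance that carries the comparison behaviour.
        class PopLess implements Comparator<State> {
            public int compare(State a, State b) {
                return Integer.compare(a.population(), b.population());
            }
        }

        public class FunctorDemo {
            public static void main(String[] args) {
                List<State> states = Arrays.asList(new State(50), new State(7), new State(23));
                states.sort(new PopLess());   // new PopLess() is the temporary object; sort calls compare() internally
                for (State s : states) {
                    System.out.println(s.population());
                }
            }
        }

    On question 2, the usual argument is that a call through a functor's operator() can typically be inlined by the compiler, while a call through a function pointer usually cannot, so the function-object version often ends up faster despite looking more verbose.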

    Read the article

  • Update C# Chart using BackgroundWorker

    - by Mark
    I am currently trying to hand a chart on my form over to a BackgroundWorker to update, using: bwCharter.RunWorkerAsync(chart1); which runs: private void bcCharter_DoWork(object sender, DoWorkEventArgs e) { System.Windows.Forms.DataVisualization.Charting.Chart chart = null; // Convert e.Argument to chart //.. // Converted.. chart.Series.Clear(); e.Result=chart; setChart(c.chart); } private void setChart(System.Windows.Forms.DataVisualization.Charting.Chart arg) { if (chart1.InvokeRequired) { chart1.Invoke(new MethodInvoker(delegate { setChart(arg); })); return; } chart1 = arg; } However, at the point of clearing the series, an exception is thrown. Basically, I want to do a whole lot more processing after clearing the series, which slows the GUI down completely - so I wanted this in another thread. I thought that by passing it as an argument I would be safe, but apparently not! Interestingly, the chart is on a tab page. I can run this over and over if the tab page is in the background, but if I run it, look at the chart, hide it again, and re-run, it throws the exception. Obviously, it also throws if the chart is in the foreground. Can anyone suggest what I can do differently? Thanks!
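
    The rule being tripped here is common to most UI toolkits: a control may only be touched from the thread that created it, so handing the live chart to the worker is the part that fails. The usual shape of the fix is to compute the new data in the background and let the UI thread apply it to the chart. A loose sketch of that pattern in Java Swing terms (purely an analogy - the question's code is WinForms, where Control.Invoke plays the role of the event dispatch thread callback):

        import javax.swing.SwingWorker;
        import java.util.Arrays;
        import java.util.List;

        public class ChartUpdateSketch {
            // Stand-in for applying data to the chart; only the UI thread should call this.
            static void applySeries(List<Double> series) {
                System.out.println("UI thread applies " + series);
            }

            public static void main(String[] args) {
                SwingWorker<List<Double>, Void> worker = new SwingWorker<List<Double>, Void>() {
                    @Override
                    protected List<Double> doInBackground() {
                        // the slow number-crunching happens off the UI thread
                        return Arrays.asList(1.0, 2.0, 3.0);
                    }

                    @Override
                    protected void done() {
                        try {
                            applySeries(get());   // done() runs on the event dispatch thread
                        } catch (Exception e) {
                            e.printStackTrace();
                        }
                    }
                };
                worker.execute();
            }
        }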

    Read the article

  • Printing not working in Tomcat when the server is started with services.msc (cannot print from the client side)

    - by maya
    I am using JasperReports 1.3.1 to print the report. I am using Eclipse and Tomcat for development. In Eclipse, when I run the application, the code below shows the available printer devices and a print button. If I click the print button, the report prints on the selected device. PrintRequestAttributeSet printRequestAttributeSet = new HashPrintRequestAttributeSet(); printRequestAttributeSet.add(MediaSizeName.ISO_A5); PrintServiceAttributeSet printServiceAttributeSet = new HashPrintServiceAttributeSet(); JRPrintServiceExporter exporter = new JRPrintServiceExporter(); exporter.setParameter(JRExporterParameter.JASPER_PRINT, jasperPrint); exporter.setParameter(JRPrintServiceExporterParameter.PRINT_REQUEST_ATTRIBUTE_SET, printRequestAttributeSet); exporter.setParameter(JRPrintServiceExporterParameter.PRINT_SERVICE_ATTRIBUTE_SET, printServiceAttributeSet); exporter.setParameter(JRPrintServiceExporterParameter.DISPLAY_PAGE_DIALOG, Boolean.FALSE); exporter.setParameter(JRPrintServiceExporterParameter.DISPLAY_PRINT_DIALOG, Boolean.TRUE); exporter.exportReport(); Here I am passing jasperPrint as a parameter, which I constructed manually. It works fine. My problem is: I created a WAR file, copied it into the Tomcat Apache Software Foundation\Tomcat 6.0\webapps directory, and started Tomcat using services.msc. At this point it does not display the printer details and does not print. I added some logging and found that the code hangs at exporter.exportReport(); no code after this line executes. Please suggest how I can print from the client side using Jasper.
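
    One thing worth checking (an assumption, since the question does not say which account the service runs under): when Tomcat is started via services.msc it usually runs as a Windows service account in a non-interactive session, so the JVM may see no printers at all and can never show the dialog that DISPLAY_PRINT_DIALOG expects. A quick way to see what that JVM can actually reach is to log the available services using the standard javax.print API - a minimal sketch:

        import javax.print.PrintService;
        import javax.print.PrintServiceLookup;

        public class PrinterProbe {
            public static void main(String[] args) {
                // Lists every print service visible to the account this JVM runs under.
                PrintService[] services = PrintServiceLookup.lookupPrintServices(null, null);
                System.out.println("Visible printers: " + services.length);
                for (PrintService service : services) {
                    System.out.println("  " + service.getName());
                }
                PrintService dflt = PrintServiceLookup.lookupDefaultPrintService();
                System.out.println("Default: " + (dflt == null ? "none" : dflt.getName()));
            }
        }

    The same few lines can be dropped into a servlet or JSP and written to the Tomcat log. If the list comes back empty under the service account, the options are typically to run the service under an account that has the printers installed, or to skip server-side printing and stream the report (for example as a PDF) to the browser so the client prints it locally.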

    Read the article

  • Trying to insert a row using a stored procedure with a parameter bound to an expression.

    - by Arvind Singh
    Environment: ASP.NET 3.5 (C# and VB), MS SQL Server 2005 Express. Tables: Table:tableUser - ID (primary key), username; Table:userSchedule - ID (primary key), thecreator (foreign key = tableUser.ID), other fields. I have created a procedure that accepts a username parameter, gets the user ID, and inserts a row into Table:userSchedule. Problem: Using a stored procedure with a DataList control to only fetch data from the database by passing the current username, with the statement below, works fine: protected void SqlDataSourceGetUserID_Selecting(object sender, SqlDataSourceSelectingEventArgs e) { e.Command.Parameters["@CurrentUserName"].Value = Context.User.Identity.Name; } But when inserting using a DetailsView it shows the error: Procedure or function OASNewSchedule has too many arguments specified. I did use protected void SqlDataSourceCreateNewSchedule_Selecting(object sender, SqlDataSourceSelectingEventArgs e) { e.Command.Parameters["@CreatedBy"].Value = Context.User.Identity.Name; } DetailsView properties: autogenerate fields off, default mode insert. It shows all the fields, including ones the procedure may not expect, such as ID (primary key), which the procedure does not require, and the CreatedBy (user id) field. So I tried removing those 2 fields from the DetailsView, and it shows the error: Cannot insert the value NULL into column 'CreatedBy', table 'D:\OAS\OAS\APP_DATA\ASPNETDB.MDF.dbo.OASTest'; column does not allow nulls. INSERT fails. The statement has been terminated. For some reason the parameter value is not being set. Can anybody help me understand this?

    Read the article

  • Destroy process-less console windows left by Visual Studio debug sessions

    - by jon hanson
    A known bug with security update KB978037 can occur with Visual Studio 2003 (and 2008) where sometimes if you restart a debugging session on a console app then the console window doesn't get closed even though the owner process no longer exists. The problem is discussed further here: http://stackoverflow.com/questions/2402875/visual-studio-debug-console-sometimes-stays-open-and-is-impossible-to-close These zombie windows then can not be closed via the Taskbar or via the TaskManager, and typically require a power off/on to get rid of them. Over the period of even a single day you can accumulate quite a few of them, which clog up your TaskBar and are generally annoying. I thought I would knock up a simple C++ Win32 utility to attempt to call DestroyWindow() on these windows by passing the windows handle as a cmd-line argument and converting it to a HWND. I'm converting the handle from a string by parsing it as a DWORD then casting the DWORD to a HWND. This appears to be working as if I call GetWindowInfo() on the handle it succeeds. However calling DestroyWindow() on the handle fails with error 5 (access denied), presumably because the caller process (i.e. my app) doesn't own the window in question. Any ideas as to how I might get rid of the zombie windows, either via the above approach or any other alternative short of rebooting? I'm in a corporate environment so installing/uninstalling updates/service-packs etc isn't an option.

    Read the article

  • Setting an Excel Range with an Array using Python and comtypes?

    - by technomalogical
    Using comtypes to drive Python, it seems some magic is happening behind the scenes that is not converting tuples and lists to VARIANT types: # RANGE(“C14:D21”) has values # Setting the Value on the Range with a Variant should work, but # list or tuple is not getting converted properly it seems >>>from comtypes.client import CreateObject >>>xl = CreateObject("Excel.application") >>>xl.Workbooks.Open(r'C:\temp\my_file.xlsx') >>>xl.Visible = True >>>vals=tuple([(x,y) for x,y in zip('abcdefgh',xrange(8))]) # creates: #(('a', 0), ('b', 1), ('c', 2), ('d', 3), ('e', 4), ('f', 5), ('g', 6), ('h', 7)) >>>sheet = xl.Workbooks[1].Sheets["Sheet1"] >>>sheet.Range["C14","D21"].Value() (('foo',1),('foo',2),('foo',3),('foo',4),('foo',6),('foo',6),('foo',7),('foo',8)) >>>sheet.Range["C14","D21"].Value[()] = vals # no error, this blanks out the cells in the Range According to the comtypes docs: When you pass simple sequences (lists or tuples) as VARIANT parameters, the COM server will receive a VARIANT containing a SAFEARRAY of VARIANTs with the typecode VT_ARRAY | VT_VARIANT. This seems to be inline with what MSDN says about passing an array to a Range's Value. I also found this page showing something similar in C#. Can anybody tell me what I'm doing wrong? EDIT I've come up with a simpler example that performs the same way (in that, it does not work): >>>from comtypes.client import CreateObject >>>xl = CreateObject("Excel.application") >>>xl.Workbooks.Add() >>>sheet = xl.Workbooks[1].Sheets["Sheet1"] # at this point, I manually typed into the range A1:B3 >>> sheet.Range("A1","B3").Value() ((u'AAA', 1.0), (u'BBB', 2.0), (u'CCC', 3.0)) >>>sheet.Range("A1","B3").Value[()] = [(x,y) for x,y in zip('xyz',xrange(3))] # Using a generator expression, per @Mike's comment # However, this still blanks out my range :(

    Read the article

  • Uploading to S3 using Curl

    - by Carl Crawley
    Hi All, I'm currently using cURL to upload a file from my server to S3 using AJAX to call the script. So I have the following: $fullfilepath = '/server/sitepath/files/' . $_POST['file']; $upload_url = 'https://'.$_POST['buckets'].'.s3.amazonaws.com/'; $params = array( 'key'=>$_POST['key'], 'AWSAccessKeyId'=>$_POST['AWSAccessKeyId'], 'acl'=>$_POST['acl'], 'success_action_status'=>$_POST['success_action_status'], 'policy'=>$_POST['policy'], 'signature'=>$_POST['signature'], 'Content-Type'=>$_POST['Content-Type'], 'file'=>"@$fullfilepath" ); $ch = curl_init(); curl_setopt($ch, CURLOPT_VERBOSE, 1); curl_setopt($ch, CURLOPT_URL, $upload_url); curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1); curl_setopt($ch, CURLOPT_POSTFIELDS, $params); $response = curl_exec($ch); curl_close($ch); echo $response; However, I'm getting an S3 error as follows when it posts and I'm unsure why because I'm not passing JSON to it. <?xml version="1.0" encoding="UTF-8"?> <Error><Code>InvalidPolicyDocument</Code><Message>Invalid Policy: Invalid JSON.</Message><RequestId>B29469C6151BE0E8</RequestId><HostId>BFPk6W2kt1b6hTtx0mEq6dWdN/IhO0gNR5bct//7LAOwJxm1C3PrxS4RPv1blzJ8</HostId></Error> I've googled it for the last hour or so and can't seem to figure it out. If I change the order of the Array fields, it gives me a different error - I believe the order of the posted fields is important somehow. any help would be much appreciated! C

    Read the article

  • asp.net mvc: What is the correct way to return html from controller to refresh select list?

    - by Mark Redman
    Hi, I am new to ASP.NET MVC, particularly Ajax operations. I have a form with a jQuery dialog for adding items to a drop-down list. This posts to the controller action. If nothing (i.e. a void method) is returned from the controller action, the page returns having updated the database, but obviously there is no change to the form. What would be the best practice for updating the drop-down list with the added id/value and selecting the item? I think my options are: 1) Construct and return the HTML for the new <select> tag manually [this would be easy enough and work, but it seems like I am missing something] 2) Use some kind of "helper" to construct the new HTML [this seems to make sense] 3) Only return the id/value, add it to the list and select the item [this seems like overkill considering the item needs to be placed in the correct order etc.] 4) Use some kind of partial view [does this mean creating additional forms within ascx controls? I'm not sure how this would affect submitting the main form it's on. Also, unless this is reusable by passing in parameters (not sure how that's done), maybe 2 is the option?] UPDATE: Having looked around a bit, it seems that generating HTML within the controller is not a good idea. I have seen other posts that render partial views to strings, which I guess is what I need and separates concerns (since the HTML bits are in the ascx). Any comments on whether that is good practice?

    Read the article

  • Using Silverlight for Views in ASP.Net MVC - a bad idea?

    - by bplus
    I'm currently writing a small application for use internally at my office. I started out teaching myself some MVC (I've been a C# dev for 3 years). One of the main requirements is editable grids - I quickly realised that Silverlight (I have zero Silverlight experience) could be a big help with this. I've managed to create a proof of concept of getting MVC and Silverlight to talk back and forth by combining these two techniques: Creating a Rest API using MVC, and MVC SilverLight. I also got some help on Stack Overflow: silverlight-grids-mvc-http-post. Essentially all I'm doing is embedding a Silverlight object in a view, serializing the model data as JSON and passing it to Silverlight (using init params written into the response). The Silverlight object can post data back to the controller as JSON. So far this seems like it could work quite well. However, I am a bit concerned that I could be painting myself into a corner with this approach; since I don't have much experience with either technology, I'm worried I'm going to get hit with something further down the line that I won't be able to work around. Has anybody else tried doing this? Any advice would be much appreciated!

    Read the article

  • Routing problem with calling a new method without an ID

    - by alkaloids
    I'm trying to put together a form_tag that edits several Shift objects. I have the form built properly, and it's passing on the correct parameters. I have verified in the console that the parameters update the objects correctly. However, when I click the submit button, I get the error: ActiveRecord::RecordNotFound in ShiftsController#update_individual Couldn't find Shift without an ID My route for the controller action it is calling looks like this: map.resources :shifts, :collection => { :update_individual => :put } The method in ShiftsController is this: def update_individual Shift.update(params[:shifts].keys, params[:shifts].values) flash[:notice] = "Schedule saved" end The relevant form parts are these: <% form_tag( update_individual_shifts_path ) do %> ... (fields for...) <%= submit_tag "Save" %> <% end %> Why is this not working? If I browse to the URL "http://localhost:3000/shifts/update_individual/5" (or any number that corresponds to an existing shift), I get the proper error about having no parameters set, but when I pass parameters without an ID of some sort, it errors out. How do I make it stop looking for an ID at the end of the URL?

    Read the article

  • Undefined method `add' on a cucumber step that usually works.

    - by Josiah Kiehl
    I have a path defined: when /the admin home\s?page/ "/admin/" I have a scenario that is passing: Scenario: Let admins see the admin homepage Given "pojo" is logged in And "pojo" is an "admin" And I am on the admin home page Then I should see "Hi there." And I have a scenario that is failing: Scenario: Review flagged photo Given "pojo" is logged in And "pojo" is an "admin" ...bunch of steps that create stuff in the database... And I am on the admin home page Then ... the rest of the steps The step that fails in the second one is "And I am on the admin home page" which passes just fine in the first scenario. Here's the error I get: And I am on the admin home page # features/step_definitions/web_steps.rb:18 undefined method `add' for {}:Hash (NoMethodError) ./app/controllers/admin_controller.rb:13:in `index' ./app/controllers/admin_controller.rb:11:in `each' ./app/controllers/admin_controller.rb:11:in `index' /usr/lib/ruby/1.8/benchmark.rb:308:in `realtime' ./features/step_definitions/web_steps.rb:19:in `/^(?:|I )am on (.+)$/' features/admin.feature:52:in `And I am on the admin home page' This is very odd... why would it be fine in the first case, and not in the second where the only difference is a bunch of steps that create records in the db? [edit] Here's the step that adds stuff to the database: Given /^there is a "([^\"]*)" with the following:$/ do |model, table| model.constantize.create!(table.rows_hash) end

    Read the article

  • DOS Batch file to echo a specific line number

    - by Lee
    So for the second part of my current dilemma, I have a list of folders in "c:\file_list.txt". I need to be able to extract them (well, echo them with some mods) based on the line number because this batch script is being called by an iterative macro process. I'm passing the line number as a parameter. @echo off setlocal enabledelayedexpansion set /a counter=0 set /a %%a = "" for /f "usebackq delims=" %%a in (c:\file_list.txt) do (if "!counter!"=="%1" goto :printme & set /a counter+=1) :printme echo %%a which gives me an output of "%a". Doh! So, I've tried echoing !a! (result: "ECHO is off."); I've tried echoing %a (result: a) I figured the easy thing to do would be to modify the "head.bat" code found here: http://stackoverflow.com/questions/130116/dos-batch-commands-to-read-first-line-from-text-file except rather than echoing every line - I'd just echo the last line found. Not as simple as one might think. I've noticed that my counter is staying at zero for some reason; I'm wondering if the "set /a counter+=1" is doing what I think it's doing.

    Read the article

  • Displaying tree path of record in SQL Server 2005

    - by jskiles1
    An example of my tree table is: ([id] is an identity) [id], [parent_id], [path] 1, NULL, 1 2, 1, 1-2 3, 1, 1-3 4, 3, 1-3-4 My goal is to query quickly for multiple rows of this table and view the full path of the node from its root, through its superiors, down to itself. The ultimate question is, should I generate this path on inserts and maintain it in its own column or generate this path on query to save disk space? I guess it depends if this table is write heavy or read heavy. I've been contemplating several approaches to using the "path" characteristic of this parent/child relationship and I just can't seem to settle on one. This "path" is simply for display purposes and serves absolutely no purpose other than that. Here is what I have done to implement this "path." AFTER INSERT TRIGGER - requires passing a NULL path to the insert and updating the path for the record at the inserted rows identity INSTEAD OF INSERT TRIGGER - does not require insert to have NULL path passed, but does require the trigger to insert with a NULL path and updating the path for the record at SCOPE_IDENTITY() STORED PROCEDURE - requiring all inserts into this table to be done through the stored procedure implementing the trigger logic VIEW - requires building the path in the view 1 and 2 seem annoying if massive amounts of data are entered at once. 3 seems annoying because all inserts must go through the procedure in order to have a valid path populated. 1, 2, and 3 require maintaining a path column on the table. 4 removes all the limitations of the above but require the view to perform the path logic and requires use of the view if a path is to be displayed. I have successfully implemented all of the above approaches and I'm mainly looking for some advice. Am I way off the mark here or are any of the above acceptable? Each has it's advantages and disadvantages.

    Read the article

  • Pass a range into a custom function from within a cell

    - by Luis
    Hi I'm using VBA in Excel and need to pass in the values from two ranges into a custom function from within a cell's formula. The function looks like this: Public Function multByElement(range1 As String, range2 As String) As Variant Dim arr1() As Variant, arr2() As Variant arr1 = Range(range1).value arr2 = Range(range2).value If UBound(arr1) = UBound(arr2) Then Dim arrayA() As Variant ReDim arrayA(LBound(arr1) To UBound(arr1)) For i = LBound(arr1) To UBound(arr1) arrayA(i) = arr1(i) * arr2(i) Next i multByElement = arrayA End If End Function As you can see, I'm trying to pass the string representation of the ranges. In the debugger I can see that they are properly passed in and the first visible problem occurs when it tries to read arr1(i) and shows as "subscript out of range". I have also tried passing in the range itself (ie range1 as Range...) but with no success. My best suspicion was that it has to do with the Active Sheet since it was called from a different sheet from the one with the formula (the sheet name is part of the string) but that was dispelled since I tried it both from within the same sheet and by specifying the sheet in the code. BTW, the formula in the cell looks like this: =AVERAGE(multByElement("A1:A3","B1:B3")) or =AVERAGE(multByElement("My Sheet1!A1:A3","My Sheet1!B1:B3")) for when I call it from a different sheet.

    Read the article

  • Pass Parameter to Subroutine in Codebehind

    - by Sanjubaba
    I'm trying to pass an ID of an activity (RefNum) to a Sub in my codebehind. I know I'm supposed to use parentheses when passing parameters to subroutines and methods, and I've tried a number of ways and keep receiving the following error: BC30203: Identifier expected. I'm hard-coding it on the front-end just to try to get it to pass [ OnDataBound="FillSectorCBList("""WK.002""")" ], but it's obviously wrong. :( Front-end: <asp:DetailsView ID="dvEditActivity" AutoGenerateRows="False" DataKeyNames="RefNum" OnDataBound="dvSectorID_DataBound" OnItemUpdated="dvEditActivity_ItemUpdated" DataSourceID="dsEditActivity" > <Fields> <asp:TemplateField> <ItemTemplate> <br /><span style="color:#0e85c1;font-weight:bold">Sector</span><br /><br /> <asp:CheckBoxList ID="cblistSector" runat="server" DataSourceID="dsGetSectorNames" DataTextField="SectorName" DataValueField="SectorID" OnDataBound="FillSectorCBList("""WK.002""")" ></asp:CheckBoxList> <%-- Datasource to populate cblistSector --%> <asp:SqlDataSource ID="dsGetSectorNames" runat="server" ConnectionString="<%$ ConnectionStrings:dbConn %>" ProviderName="<%$ ConnectionStrings:dbConn.ProviderName %>" SelectCommand="SELECT SectorID, SectorName from Sector ORDER BY SectorID"></asp:SqlDataSource> </ItemTemplate> </asp:TemplateField> </Fields> </asp:DetailsView> Code-behind: Sub FillSectorCBList(ByVal RefNum As String, ByVal sender As Object, ByVal e As System.EventArgs) Dim SectorIDs As New ListItem Dim myConnection As String = ConfigurationManager.ConnectionStrings("dbConn").ConnectionString() Dim objConn As New SqlConnection(myConnection) Dim strSQL As String = "SELECT DISTINCT A.RefNum, AS1.SectorID, S.SectorName FROM Activity A LEFT OUTER JOIN Activity_Sector AS1 ON AS1.RefNum = A.RefNum LEFT OUTER JOIN Sector S ON AS1.SectorID = S.SectorID WHERE A.RefNum = @RefNum ORDER BY A.RefNum" Dim objCommand As New SqlCommand(strSQL, objConn) objCommand.Parameters.AddWithValue("RefNum", RefNum) Dim ad As New SqlDataAdapter(objCommand) Try [Code] Finally [Code] End Try objCommand.Connection.Close() objCommand.Dispose() objConn.Close() End Sub Any advice would be great. I'm not sure if I even have the right approach. Thank you!

    Read the article

  • What is the best way to use Guice and JMock together?

    - by Yishai
    I have started using Guice to do some dependency injection on a project, primarily because I need to inject mocks (using JMock currently) a layer away from the unit test, which makes manual injection very awkward. My question is what is the best approach for introducing a mock? What I currently have is to make a new module in the unit test that satisfies the dependencies and bind them with a provider that looks like this: public class JMockProvider<T> implements Provider<T> { private T mock; public JMockProvider(T mock) { this.mock = mock; } public T get() { return mock; } } Passing the mock in the constructor, so a JMock setup might look like this: final CommunicationQueue queue = context.mock(CommunicationQueue.class); final TransactionRollBack trans = context.mock(TransactionRollBack.class); Injector injector = Guice.createInjector(new AbstractModule() { @Override protected void configure() { bind(CommunicationQueue.class).toProvider(new JMockProvider<QuickBooksCommunicationQueue>(queue)); bind(TransactionRollBack.class).toProvider(new JMockProvider<TransactionRollBack>(trans)); } }); context.checking(new Expectations() {{ oneOf(queue).retrieve(with(any(int.class))); will(returnValue(null)); never(trans); }}); injector.getInstance(RunResponse.class).processResponseImpl(-1); Is there a better way? I know that AtUnit attempts to address this problem, although I'm missing how it auto-magically injects a mock that was created locally like the above, but I'm looking for either a compelling reason why AtUnit is the right answer here (other than its ability to change DI and mocking frameworks around without changing tests) or if there is a better solution to doing it by hand.
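
    If the goal is simply to hand Guice mocks that already exist, the custom Provider may not be needed at all - Guice can bind straight to an instance. A minimal sketch (assuming the same types as above; the mocks still come from JMock exactly as before):

        import com.google.inject.AbstractModule;

        // CommunicationQueue, TransactionRollBack and RunResponse are the question's own types.
        public class MockModule extends AbstractModule {
            private final CommunicationQueue queue;
            private final TransactionRollBack trans;

            public MockModule(CommunicationQueue queue, TransactionRollBack trans) {
                this.queue = queue;
                this.trans = trans;
            }

            @Override
            protected void configure() {
                // Bind the pre-built mock instances directly; no Provider needed.
                bind(CommunicationQueue.class).toInstance(queue);
                bind(TransactionRollBack.class).toInstance(trans);
            }
        }

        // In the test:
        //   Injector injector = Guice.createInjector(new MockModule(queue, trans));
        //   injector.getInstance(RunResponse.class).processResponseImpl(-1);

    Binding with toInstance keeps the wiring in one small module and leaves the JMock expectations untouched, which seems to be the main thing the JMockProvider was buying.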

    Read the article

  • Save PyML.classifiers.multi.OneAgainstRest(SVM()) object?

    - by Michael Aaron Safyan
    I'm using PYML to construct a multiclass linear support vector machine (SVM). After training the SVM, I would like to be able to save the classifier, so that on subsequent runs I can use the classifier right away without retraining. Unfortunately, the .save() function is not implemented for that classifier, and attempting to pickle it (both with standard pickle and cPickle) yield the following error message: pickle.PicklingError: Can't pickle : it's not found as __builtin__.PySwigObject Does anyone know of a way around this or of an alternative library without this problem? Thanks. Edit/Update I am now training and attempting to save the classifier with the following code: mc = multi.OneAgainstRest(SVM()); mc.train(dataset_pyml,saveSpace=False); for i, classifier in enumerate(mc.classifiers): filename=os.path.join(prefix,labels[i]+".svm"); classifier.save(filename); Notice that I am now saving with the PyML save mechanism rather than with pickling, and that I have passed "saveSpace=False" to the training function. However, I am still gettting an error: ValueError: in order to save a dataset you need to train as: s.train(data, saveSpace = False) However, I am passing saveSpace=False... so, how do I save the classifier(s)? P.S. The project I am using this in is pyimgattr, in case you would like a complete testable example... the program is run with "./pyimgattr.py train"... that will get you this error. Also, a note on version information: [michaelsafyan@codemage /Volumes/Storage/classes/cse559/pyimgattr]$ python Python 2.6.1 (r261:67515, Feb 11 2010, 00:51:29) [GCC 4.2.1 (Apple Inc. build 5646)] on darwin Type "help", "copyright", "credits" or "license" for more information. import PyML print PyML.__version__ 0.7.0

    Read the article

  • Generating dynamic css using php and javascript

    - by Onkar Deshpande
    I want to generate a tooltip based on a dynamically changing background image in CSS. This is my my_css.php file: <?php header('content-type: text/css'); $i = $_GET['index']; if($i == 0) $bg_image_path = "../bg_red.jpg"; elseif ($i == 1) $bg_image_path = "../bg_yellow.jpg"; elseif ($i == 2) $bg_image_path = "../bg_green.jpg"; elseif ($i == 3) $bg_image_path = "../bg_blue.jpg"; ?> .tooltip { white-space: nowrap; color:green; font-weight:bold; border:1px solid black;; font-size:14px; background-color: white; margin: 0; padding: 7px 4px; border: 1px solid black; background-image: url(<?php echo $bg_image_path; ?>); background-repeat:repeat-x; font-family: Helvetica,Arial,Sans-Serif; font-family: Times New Roman,Georgia,Serif; filter:alpha(opacity=85); opacity:0.85; zoom: 1; } To use this CSS I added <link rel="stylesheet" href="css/my_css.php" type="text/css" media="screen" /> inside the <head> tag of my HTML. I am thinking of passing different values of 'index' so that it would generate the background image dynamically. Can anyone tell me how I should pass such values from JavaScript? I am creating the tooltip using var tooltip = document.createElement("div"); document.getElementById("map").appendChild(tooltip); tooltip.style.visibility="hidden"; and I think that before calling this createElement, I should set the background image.

    Read the article

  • Is there a way to programmatically tell if particular block of memory was not freed by FastMM?

    - by Wodzu
    I am trying to detect whether a block of memory was not freed. Of course, the manager tells me that via a dialog box or log file, but what if I would like to store the results in a database? For example, I would like to have in a database table the names of the routines which allocated the given blocks. After reading the FastMM documentation I know that since version 4.98 we can be notified by the manager about memory allocations, frees and reallocations as they occur. For example, the OnDebugFreeMemFinish event passes us a PFullDebugBlockHeader which contains useful information. There is one thing that PFullDebugBlockHeader is missing - the information whether the given block was freed by the application. Unless OnDebugFreeMemFinish is called only for blocks that were not freed? This is what I do not know and would like to find out. The problem is that even after hooking into the OnDebugFreeMemFinish event I was unable to find out whether the block was freed or not. Here is an example: program MemLeakTest; {$APPTYPE CONSOLE} uses FastMM4, ExceptionLog, SysUtils; procedure MemFreeEvent(APHeaderFreedBlock: PFullDebugBlockHeader; AResult: Integer); begin //This is executed at the end, but how should I know that this block should be freed //by application? Unless this is executed ONLY for not freed blocks. end; procedure Leak; var MyObject: TObject; begin MyObject := TObject.Create; end; begin OnDebugFreeMemFinish := MemFreeEvent; Leak; end. What I am missing is a callback like: procedure OnMemoryLeak(APointer: PFullDebugBlockHeader); After browsing the source of FastMM I saw that there is a procedure: procedure LogMemoryLeakOrAllocatedBlock(APointer: PFullDebugBlockHeader; IsALeak: Boolean); which could be overridden, but maybe there is an easier way?

    Read the article

  • Advice on Method overloads.

    - by Muhammad Kashif Nadeem
    Please see the following methods. public static ProductsCollection GetDummyData(int? customerId, int? supplierId) { try { if (customerId != null && customerId > 0) { Filter.Add(Customres.CustomerId == customerId); } if (supplierId != null && supplierId > 0) { Filter.Add(Suppliers.SupplierId == supplierId); } ProductsCollection products = new ProductsCollection(); products.FetchData(Filter); return products; } catch { throw; } } public static ProductsCollection GetDummyData(int? customerId) { return GetDummyData(customerId, (int?)null); } public static ProductsCollection GetDummyData() { return GetDummyData((int?)null); } 1- Please advise how I can make overloads for both CustomerId and SupplierId, because only one overload can be created with GetDummyData(int?). Should I add another argument to indicate whether the first argument is CustomerId or SupplierId, for example GetDummyData(int?, string)? Or should I use an enum as the 2nd argument to indicate whether the first argument is CustomerId or SupplierId? 2- Is this condition correct, or is just checking > 0 sufficient: if (customerId != null && customerId > 0)? 3- Is using try/catch like this correct? 4- Is passing (int?)null correct, or is there a better approach? Edit: I have found some other posts like this, and because I have no knowledge of generics, that is why I am facing this problem. Am I right? Following is the post: http://stackoverflow.com/questions/422625/overloaded-method-calling-overloaded-method
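
    For question 1, the enum idea from the post can work well, and separately named methods are another common way out of the "two ints that mean different things" overload problem. A loose sketch of both, written in Java purely as an illustration (the original code is C#, and all names here are made up):

        import java.util.Collections;
        import java.util.List;

        public class ProductService {
            // Discriminator for what the single id means.
            public enum PartyType { CUSTOMER, SUPPLIER }

            public static List<String> getDummyData(Integer id, PartyType type) {
                if (id != null && id > 0) {
                    if (type == PartyType.CUSTOMER) {
                        // ... filter by customer id
                    } else {
                        // ... filter by supplier id
                    }
                }
                return Collections.emptyList();   // placeholder result
            }

            // Separately named wrappers often read better than overloads
            // whose parameters differ only in meaning, not in type.
            public static List<String> getDummyDataForCustomer(Integer customerId) {
                return getDummyData(customerId, PartyType.CUSTOMER);
            }

            public static List<String> getDummyDataForSupplier(Integer supplierId) {
                return getDummyData(supplierId, PartyType.SUPPLIER);
            }
        }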

    Read the article

  • Extracting shell script from parameterised Hudson job

    - by Jonik
    I have a parameterised Hudson job, used for some AWS deployment stuff, which in one build step runs certain shell commands. However, that script has become sufficiently complicated that I want to "extract" it from Hudson to a separate script file, so that it can easily be versioned properly. The Hudson job would then simply update from VCS and execute the external script file. My main question is about passing parameters to the script. I have a Hudson parameter named AMI_ID and a few others. The script references those params as if they were environment variables: echo "Using AMI $AMI_ID and type $TYPE" Now, this works fine inside Hudson, but not if Hudson calls an external script. Could I somehow make Hudson set the params as environment variables so that I don't need to change the script? Or is my best option to alter the script to take command line parameters (and possibly assign those to named variables for readability: ami_id=$1; type=$2; ... )? I tried something like this but the script doesn't get correctly replaced values: export AMI_ID=$AMI_ID export TYPE=$TYPE external-script.sh # this tries to use e.g. $AMI_ID Bonus question: when the script is inside Hudson, the "console output" will contain both the executed commands and their output. This is extremely useful for debugging when something goes wrong with a build! For example, here the line starting with "+" is part of the script and the following line its output: + ec2-associate-address -K pk.pem -C cert.pem 77.125.116.139 -i i-aa3487fd ADDRESS 77.125.116.139 i-aa3487fd When calling an external script, Hudson output will only contain the latter line, making debugging harder. I could cat the script file to stdout before running it, but that's not optimal either. In effect, I'd like a kind of DOS-style "echo on" for the script which I'm calling from Hudson - anyone know a trick to achieve this?

    Read the article

  • Does Perl auto-vivify variables used as references in subroutine calls?

    - by FM
    I've declared 2010 to be the year of higher-order programming, so I'm learning Haskell. The introduction has a slick quick-sort demo, and I thought, "Hey, that's easy to do in Perl". It turned to be easier than I expected. Note that I don't have to worry about whether my partitions ($less and $more) are defined. Normally you can't use an undefined value as an array reference. use strict; use warnings; use List::MoreUtils qw(part); my @data = (5,6,7,4,2,9,10,9,5,1); my @sorted = qsort(@data); print "@sorted\n"; sub qsort { return unless @_; my $pivot = shift @_; my ($less, $more) = part { $_ < $pivot ? 0 : 1 } @_; # Works, even though $less and $more are sometimes undefined. return qsort(@$less), $pivot, qsort(@$more); } As best I can tell, Perl will auto-vivify a variable that you try to use as a reference -- but only if you are passing it to a subroutine. For example, my call to foo() works, but not the attempted print. use Data::Dumper qw(Dumper); sub foo { print "Running foo(@_)\n" } my ($x); print Dumper($x); # Fatal: Can't use an undefined value as an ARRAY reference. # print @$x, "\n"; # But this works. foo(@$x); # Auto-vivification: $x is now []. print Dumper($x); My questions: Am I understanding this behavior correctly? What is the explanation or reasoning behind why Perl does this? Is this behavior explained anywhere in the docs?

    Read the article

  • Error setting env thru subprocess.call to run a python script on a remote linux machine

    - by John Smith
    I am running a Python script on a Windows machine to invoke another Python script on a remote Linux machine. I am using subprocess.call with ssh to do this, like below: subprocess.call('ssh -i <identity file> username@hostname python <script_on_linux_machine>') and this works fine. However, if I want to set some environment variables, like below: subprocess.call('ssh -i <identity file> username@hostname python <script_on_linux_machine>', env={key1:value1}) it fails. I get the following error: ssh_connect: getnameinfo failed ssh: connect to host <hostname> port 22: Operation not permitted 255 I've tried splitting the ssh command into a list and passing that. It didn't help. I've tried running other 'local' (Windows) commands through subprocess.call() while setting the env, and that works fine. I've tried running other commands (such as ls) on the remote Linux machine; again, subprocess.call() works fine as long as I don't try to set the environment. What am I doing wrong? Would I be able to set the environment for a Python script on a remote machine? Any help will be appreciated.

    Read the article
