Search Results

Search found 44902 results on 1797 pages for 'call naive true'.


  • Why is this CHOICE element not getting assigned in my SharePoint Field definition schema?

    - by ccornet
    I defined a new field of the type "Choice" for my web application. It will serve basically as a pseudo-lookup, as its contents are defined by the value of a Text field in a list. It is initialized with a dummy choice to begin with (I'm under the impression a choice field needs at least one choice when defined), which is replaced with a real choice later on. But for some reason, this dummy choice is never actually added to the choices! Below is the XML schema for the field in question.

        <Field ID="{ALICEH-ASFA-KEGU-IDLISTED}"
               Name="ddlSystems"
               Group="Lookup Columns"
               DisplayName="ddlSystems"
               Type="Choice"
               Sealed="FALSE"
               ReadOnly="FALSE"
               Hidden="FALSE"
               FillInChoice="TRUE"
               DisplaceOnUpgrade="TRUE">
          <CHOICES>
            <CHOICE>BLANULL</CHOICE>
          </CHOICES>
          <Default>BLANULL</Default>
        </Field>

    Initially, I used a default choice of (a single space), but I changed it to BLANULL so that I can parse an actual word instead of a veritably empty string. Now, even after having uninstalled and reinstalled the feature with this field, I have a choice field that has (still a single space) as the only choice. Even more perplexing, BLANULL is actually listed as the default value in both the UI and the object model! What is causing this problem, and how can I circumvent it so that I don't have to manually set this dummy value each time?


  • Handling Errors in PHP

    - by Mike
    I have a custom class that, when called, will redirect to a page and send a 'message_type' and 'message' variable via GET. When the page opens, it checks for these variables and displays a 'success', 'warning', or 'error' message depending on the 'message_type' variable. I made it so the user thinks they stay on the same page. It also allows other variables to be passed along with the message. Is this good practice, or should I just start using exceptions? Example:

        //Call a static function that will redirect to a page, with an error message
        RedirectWithMessage::go('somepage.php', MessageType::ERROR, 'Error message here.');

    The following checkMessage() function is in an include file:

        function checkMessage() {
            if ((isset($_GET['message_type']) && strlen($_GET['message_type']))
                && (isset($_GET['message']) && strlen($_GET['message']))) {
                DisplayMessage::display($_GET['message_type'], $_GET['message']);
                return true;
            }
            return false;
        }

    On the page that is redirected to, call checkMessage():

        //If a message is received, display it. If not, do nothing
        checkMessage();

    I know this might be vague, and I can supply more code if necessary. I guess the issue is that I don't have much experience using exceptions, but they seem cumbersome (writing try-catch blocks everywhere). Please let me know if I am making this more difficult for myself or if there is a better solution. Thanks! Mike


  • Unexpected performance curve from CPython merge sort

    - by vkazanov
    I have implemented a naive merge sort algorithm in Python. The algorithm and test code are below:

        import time
        import random
        import matplotlib.pyplot as plt
        import math
        from collections import deque

        def sort(unsorted):
            if len(unsorted) <= 1:
                return unsorted
            to_merge = deque(deque([elem]) for elem in unsorted)
            while len(to_merge) > 1:
                left = to_merge.popleft()
                right = to_merge.popleft()
                to_merge.append(merge(left, right))
            return to_merge.pop()

        def merge(left, right):
            result = deque()
            while left or right:
                if left and right:
                    elem = left.popleft() if left[0] > right[0] else right.popleft()
                elif not left and right:
                    elem = right.popleft()
                elif not right and left:
                    elem = left.popleft()
                result.append(elem)
            return result

        LOOP_COUNT = 100
        START_N = 1
        END_N = 1000

        def test(fun, test_data):
            start = time.clock()
            for _ in xrange(LOOP_COUNT):
                fun(test_data)
            return time.clock() - start

        def run_test():
            timings, elem_nums = [], []
            test_data = random.sample(xrange(100000), END_N)
            for i in xrange(START_N, END_N):
                loop_test_data = test_data[:i]
                elapsed = test(sort, loop_test_data)
                timings.append(elapsed)
                elem_nums.append(len(loop_test_data))
                print "%f s --- %d elems" % (elapsed, len(loop_test_data))
            plt.plot(elem_nums, timings)
            plt.show()

        run_test()

    As far as I can see everything is OK, and I should get a nice N*logN curve as a result. But the resulting plot differs a bit: there are sudden jumps in the curve. Things I've tried to investigate the issue:

    - PyPy. The curve is OK.
    - Disabled the GC using the gc module. Wrong guess: debug output showed that it doesn't even run until the end of the test.
    - Memory profiling using meliae: nothing special or suspicious.
    - I had another implementation (a recursive one using the same merge function), and it acts in a similar way.

    The more full test cycles I run, the more "jumps" there are in the curve. So how can this behaviour be explained and, hopefully, fixed?

    UPD: changed lists to collections.deque
    UPD2: added the full test code
    UPD3: I use Python 2.7.1 on Ubuntu 11.04, on a quad-core 2 GHz notebook. I tried to turn off most other processes: the number of spikes went down, but at least one of them was still there.
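
    One thing I would try (my own suggestion, not part of the original post): time each input size several times and keep the minimum, so that one-off OS scheduling or allocation pauses do not show up as spikes in the curve. A minimal sketch, assuming the sort function above:

        # Sketch only: repeat each measurement and keep the best (minimum) time,
        # which filters transient system noise out of the plotted curve.
        import timeit

        def best_time(fun, data, repeats=5, loops=20):
            timer = timeit.Timer(lambda: fun(data))
            return min(timer.repeat(repeat=repeats, number=loops)) / float(loops)

    Plotting best_time(sort, test_data[:i]) for each i should show the underlying growth more clearly than a single timing per size.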


  • MonoRail ActiveRecord - The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE

    - by Justin
    Hey all, I'm new to MonoRail and ActiveRecord and have inherited an application that I need to modify. I have a Category table (Id, Name) and I'm adding a ParentId FK which references the Id in the same table, so that you can have parent/child category relationships. I tried adding this to my Category model class:

        [Property]
        public int ParentId { get; set; }

    I also tried doing it this way:

        [BelongsTo("ParentId")]
        public Category Parent { get; set; }

    When I call a method to get all parent categories (they have a null ParentId), it works fine:

        public static Category[] GetParentCategories()
        {
            var criteria = DetachedCriteria.For<Core.Models.Category>();
            return (FindAllByProperty("ParentId", null));
        }

    However, when I call a method to get all child categories within a specific category, it errors out:

        public static Category[] GetChildCategories(int parentId)
        {
            var criteria = DetachedCriteria.For<Core.Models.Category>();
            return (FindAllByProperty("ParentId", parentId));
        }

    The error is: "The UPDATE statement conflicted with the FOREIGN KEY SAME TABLE constraint \"FK_Category_ParentId\". The conflict occurred in database \"UCampus\", table \"dbo.Category\", column 'Id'.\r\nThe statement has been terminated."

    I'm hard-coding in the parentId parameter as 1, and I'm 100% sure it exists as an Id in the Category table, so I don't know why it'd give this error. Also, I'm doing a select, not an update, so what is going on here?? Thanks for any input on this, Justin


  • Save a form in an XML file using Ajax and JSP

    - by novellino
    Hello, I want to create a simple form with a name and an email and save these data in an XML file. So far I found that using Ajax with jQuery is quite easy, so I used the usual code:

        //dataString has the values taken from the form
        var dataString = 'name=' + name + '&email=' + email;

        $.ajax({
            type: "POST",
            url: "users.xml",
            data: dataString,
            dataType: "xml",
            success: function() {
                ....
            }
        });

    If I understood correctly, the url should hold the name of the XML file that will be created. When the user clicks a button I call the function with the Ajax request, and then I should call somewhere a function for generating the XML. I am also using two beans: one for setting the elements of the user, and the other for saving the data in the XML. I am using the XStream library for the XML, although I don't know if it is the best solution. The problem now is that I cannot connect all these pieces together in order to save the data in the XML. Does anyone know what I should do? Thanks a lot!


  • Calling PHP functions within HEREDOC strings

    - by Doug Kavendek
    In PHP, HEREDOC string declarations are really useful for outputting a block of HTML. You can have it parse variables just by prefixing them with $, but for more complicated syntax (like $var[2][3]), you have to put your expression inside {} braces. In PHP 5, it is possible to actually make function calls within {} braces inside a HEREDOC string, but you have to go through a bit of work. The function name itself has to be stored in a variable, and you have to call it as if it were a dynamically-named function. For example:

        $fn = 'testfunction';
        function testfunction() { return 'ok'; }

        $string = <<< heredoc
        plain text and now a function: {$fn()}
        heredoc;

    As you can see, this is a bit messier than just:

        $string = <<< heredoc
        plain text and now a function: {testfunction()}
        heredoc;

    There are other ways besides the first code example, such as breaking out of the HEREDOC to call the function, or reversing the issue and doing something like:

        ?>
        <!-- directly outputting html and only breaking into php for the function -->
        plain text and now a function: <?PHP print testfunction(); ?>

    The latter has the disadvantage that the output is put directly into the output stream (unless I'm using output buffering), which might not be what I want. So, the essence of my question is: is there a more elegant way to approach this?

    Edit based on responses: It certainly does seem like some kind of template engine would make my life much easier, but it would require me to basically invert my usual PHP style. Not that that's a bad thing, but it explains my inertia. I'm up for figuring out ways to make life easier, though, so I'm looking into templates now.


  • BindingList<T> and reflection!

    - by Aren B
    Background: Working in .NET 2.0 here, reflecting lists in general. I was originally using t.IsAssignableFrom(typeof(IEnumerable)) to detect whether a property I was traversing supported the IEnumerable interface (and thus I could cast the object to it safely). However, this code was not evaluating to true when the object is a BindingList<T>. Next I tried to use t.IsSubclassOf(typeof(IEnumerable)) and didn't have any luck either.

    Code:

        /// <summary>
        /// Reflects an enumerable (not a list, bad name should be fixed later maybe?)
        /// </summary>
        /// <param name="o">The Object the property resides on.</param>
        /// <param name="p">The Property We're reflecting on</param>
        /// <param name="rla">The Attribute tagged to this property</param>
        public void ReflectList(object o, PropertyInfo p, ReflectedListAttribute rla)
        {
            Type t = p.PropertyType;
            //if (t.IsAssignableFrom(typeof(IEnumerable)))
            if (t.IsSubclassOf(typeof(IEnumerable)))
            {
                IEnumerable e = p.GetValue(o, null) as IEnumerable;
                int count = 0;
                if (e != null)
                {
                    foreach (object lo in e)
                    {
                        if (count >= rla.MaxRows)
                            break;
                        ReflectObject(lo, count);
                        count++;
                    }
                }
            }
        }

    The intent: I want to basically tag the lists I want to reflect through with the ReflectedListAttribute and call this function on the properties that have it (already working). Once inside this function, given the object the property resides on and the related PropertyInfo, get the value of the property, cast it to an IEnumerable (assuming it's possible), and then iterate through each child and call ReflectObject(...) on the child with the count variable.


  • How to create a column of type password in a GridView?

    - by Preeti
    Hi, I am creating an application in which the user selects files and provides credentials to open each file. For that I have created three columns in a grid view. The user enters the password in the password column, and I want to display '*' in place of the characters, like a textbox of password type. I have tried this code in the 'GridView_CellClick' event:

        if (GridView.Columns[e.ColumnIndex].HeaderText == "Password")
        {
            txtPassword[e.RowIndex] = new TextBox();
            txtPassword[e.RowIndex].Name = "txtPassword" + e.RowIndex;
            txtPassword[e.RowIndex].PasswordChar = '*';
            txtPassword[e.RowIndex].Visible = true;
            txtPassword[e.RowIndex].TextChanged += new

            if (GridView.CurrentCell.Value == null)
                txtPassword[e.RowIndex].Text = "";
            else
                txtPassword[e.RowIndex].Text = GridView.CurrentCell.Value.ToString();

            txtPassword[e.RowIndex].Location = GridView.GetCellDisplayRectangle(e.ColumnIndex, e.RowIndex + 1, false).Location;
            txtPassword[e.RowIndex].Size = GridView.GetCellDisplayRectangle(e.ColumnIndex, e.RowIndex + 1, false).Size;
            txtPassword[e.RowIndex].Visible = true;
            txtPassword[e.RowIndex].Focus();
        }

    But in the above solution the characters are still displayed. How can I solve this problem?


  • Apache CXF REST Services w/ Spring AOP

    - by jconlin
    I'm trying to get Apache CXF JAX-RS services working with Spring AOP. I've created a simple logging class:

        public class AOPLogger {
            public void logBefore() {
                System.out.println("Logging Before!");
            }
        }

    My Spring configuration (beans.xml):

        <aop:config>
            <aop:aspect id="aopLogger" ref="test.aop.AOPLogger">
                <aop:before method="logBefore" pointcut="execution(* test.rest.RestService(..))"/>
            </aop:aspect>
        </aop:config>

        <bean id="aopLogger" class="test.aop.AOPLogger"/>

    I always get an NPE in RestService when a call is made to the method getServletRequest(), which has:

        return messageContext.getHttpServletRequest();

    If I remove the AOP configuration or comment it out from my beans.xml, everything works fine. All of my actual REST services extend test.rest.RestService (which is a class) and call getServletRequest(). I'm just trying to get AOP up and running based off the example in the CXF JAX-RS documentation. Does anyone have any idea what I'm doing wrong? Thanks!


  • Calling .ajax() from an event handler in C# ASP.NET

    - by ibininja
    Good day! In the code-behind (upload.aspx) I have an event that returns the number of bytes being streamed; as I debug it, it works fine. I wanted to reflect the numbers returned from the event handler on a progress bar, and this is where I got lost. I tried using jQuery's .ajax() function. This is how I implemented it. In the event handler in my code-behind I added this code to call the .ajax() function:

        Page.ClientScript.RegisterStartupScript(this.GetType(), "UpdateProgress",
            "<script type='text/javascript'>updateProgress();</script>");

    My plan is that whenever the event handler changes the value of bytes being streamed, it calls the JavaScript function updateProgress(). The .ajax() call in updateProgress() is:

        function updateProgress() {
            $.ajax({
                type: "POST",
                url: "upload.aspx/GetData",
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                async: true,
                success: function (msg) {
                    $("#progressbar").progressbar("option", "value", msg.d);
                }
            });
        }

    I made sure that the function GetData() is a [System.Web.Services.WebMethod] and that it is static as well. So the workflow I am trying to implement is:

    - Click on the Upload button
    - The code-behind starts executing and the EventHandler triggers
    - The EventHandler calls the .ajax() function
    - The .ajax() function retrieves the bytes being streamed and updates the progress bar

    When I ran the code, all runs well except that the .ajax() call is only executed when the upload is finished (and the progress bar also updates only when the upload has finished), even though I call the .ajax() function every time in the event handler, as reflected above. What am I doing wrong? Am I thinking of this right? Is there anything else I should add, maybe an UpdatePanel or something? Thank you


  • Triangle numbers problem: show the output within 4 seconds

    - by Daredevil
    The sequence of triangle numbers is generated by adding the natural numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 = 28. The first ten terms would be: 1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...

    Let us list the factors of the first seven triangle numbers:

        1:  1
        3:  1,3
        6:  1,2,3,6
        10: 1,2,5,10
        15: 1,3,5,15
        21: 1,3,7,21
        28: 1,2,4,7,14,28

    We can see that 28 is the first triangle number to have over five divisors. Given an integer n, display the first triangle number having at least n divisors.

        Sample Input: 5
        Output: 28
        Input Constraints: 1 <= n <= 320

    I was obviously able to do this question, but I used a naive algorithm: get n, then generate triangle numbers and check their number of factors using the mod operator. The challenge was to show the output within 4 seconds of input, and on high inputs like 190 and above it took almost 15-16 seconds. Then I tried to put the triangle numbers and their number of factors in a 2D array first, then get the input from the user and search the array, but somehow I couldn't do it: I got a lot of processor faults. Please try doing it with this method and paste the code, or if there are any better ways, please tell me.
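
    One way to get well under the time limit (my own sketch in Python, not the original poster's approach or language) is to use the fact that T(k) = k*(k+1)/2 and that k and k+1 share no common factors, so the divisor count of T(k) is the product of the divisor counts of two much smaller numbers:

        # Sketch: d(T(k)) = d(k/2) * d(k+1) if k is even, else d(k) * d((k+1)/2),
        # because k and k+1 are coprime. Trial division up to sqrt(m) is enough here.
        def count_divisors(m):
            count = 1
            d = 2
            while d * d <= m:
                if m % d == 0:
                    exp = 0
                    while m % d == 0:
                        m //= d
                        exp += 1
                    count *= exp + 1
                d += 1
            if m > 1:
                count *= 2          # one prime factor larger than sqrt(m) remains
            return count

        def first_triangle_with_divisors(n):
            k = 1
            while True:
                if k % 2 == 0:
                    divisors = count_divisors(k // 2) * count_divisors(k + 1)
                else:
                    divisors = count_divisors(k) * count_divisors((k + 1) // 2)
                if divisors >= n:   # "at least n divisors", per the problem statement
                    return k * (k + 1) // 2
                k += 1

        print(first_triangle_with_divisors(5))   # 28

    For n up to 320 this finishes far inside the 4-second limit, and the same idea translates to whatever language the original solution used.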


  • Python DictReader - Skipping rows with missing columns?

    - by victorhooi
    heya, I have an Excel .CSV file I'm attempting to read in with DictReader. All seems to be well, except it seems to omit rows, specifically those with missing columns. Our input looks like:

        mail,givenName,sn,lorem,ipsum,dolor,telephoneNumber
        [email protected],ian,bay,3424,8403,2535,+65(2)34523534545
        [email protected],mike,gibson,3424,8403,2535,+65(2)34523534545
        [email protected],ross,martin,,,,+65(2)34523534545
        [email protected],david,connor,,,,+65(2)34523534545
        [email protected],chris,call,3424,8403,2535,+65(2)34523534545

    So some of the rows have missing lorem/ipsum/dolor columns, and it's just a string of commas for those. We're reading it in with:

        def read_gd_dump(input_file="blah 20100423.csv"):
            gd_extract = csv.DictReader(open('blah 20100423.csv'), restval='missing', dialect='excel')
            return dict([(row['something'], row) for row in gd_extract])

    And I checked that "something" (the key for our dict) isn't one of the missing columns; I had originally suspected it might be that. It's one of the columns after that. However, DictReader seems to completely skip over those rows. I tried setting restval to something, but it didn't seem to make any difference. I can't seem to find anything in Python's CSV docs (http://docs.python.org/library/csv.html) that would explain this behaviour, but I may have misread something. Any ideas? Thanks, Victor
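
    A quick diagnostic worth running (my own sketch, using the 'mail' column from the sample to stand in for the real key column): DictReader itself does not drop short rows, it pads them with restval, so comparing how many rows the reader yields with how many keys survive the dict() call shows whether rows are lost during parsing or collapsed by duplicate key values.

        # Sketch only: count parsed rows versus distinct key values.
        import csv

        with open('blah 20100423.csv') as f:
            rows = list(csv.DictReader(f, restval='missing', dialect='excel'))

        print(len(rows))                               # rows DictReader actually produced
        print(len(set(row['mail'] for row in rows)))   # keys that would survive dict()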


  • Asp.Net tree view in SharePoint webpart- Input string error

    - by Faiz
    Hi all, I am facing a very strange issue. I have a SharePoint web part that displays an ASP.NET tree view. It takes the tree depth from a drop-down. To improve the performance of the tree view, I set the PopulateOnDemand property to true for the last level of the tree depth. For example, if I have a total of 10 levels in the data and the user selects a tree depth of 3, then on the third level I set PopulateOnDemand to true. Now comes the strange part. When I click on the + image on the third level, and there are children under that particular node, the callback happens and the node gets expanded. But if there are no children for that particular node, then clicking + throws an "Input string was not in the correct format" error. I have made sure that there is no server-side error. Something looks fishy when Internet Explorer tries to bind and construct the expanded node. Please let me know if anyone has faced a similar issue, or knows the resolution for it. Thanks in advance


  • Visual Studio: How to attach a debugger dynamically to a specific process

    - by Jeff Cyr
    I am building an internal dev tool to manage different processes commonly used in our development environment. The tool shows the list of monitored processes, indicates their running state, and allows starting or stopping each process. I'd like to add the ability to attach a debugger to a monitored process from my tool, instead of going to 'Debug - Attach to Process' in Visual Studio and finding the process. My goal is to have something like Debugger.Launch() that would show a list of the available Visual Studio instances. I can't use Debugger.Launch() because it launches the debugger on the process that makes the call; I would need something like Debugger.Launch(processId). Does anyone know how to achieve this functionality? A solution could be to implement a command in each monitored process to call Debugger.Launch() when the command is received from the monitoring tool, but I would prefer something that does not require modifying the code of the monitored processes.

    Side question: When using Debugger.Launch(), instances of Visual Studio that already have a debugger attached are not listed. Visual Studio is not limited to one attached debugger; you can attach to multiple processes when using 'Debug - Attach to Process'. Does anyone know how to bypass this limitation when using Debugger.Launch(), or an alternative?


  • Read hexadecimal numbers from a file and change their representation style

    - by user576844
    I want to write a program that changes the notation of all hexadecimal numbers found in an assembly source file from the traditional style (h suffix) to C style (0x prefix). I have started the coding part, but I am not sure how I can detect the hexadecimal numbers, change their style, and save the result back to the file. Here is what I have written so far:

        ## Mips program -
                .data
        fin:    .ascii ""               # filename for input
        msg0:   .asciiz "aaaa"
        msg1:   .asciiz "Please enter the input file name:"
        buffer: .asciiz ""
                .text
        #-----------------------
                li   $v0, 4
                la   $a0, msg1
                syscall

                li   $v0, 8
                la   $a0, fin
                li   $a1, 21
                syscall

                jal  fileRead             # read from file
                move $s1, $v0             # $t0 = total number of bytes
                li   $t0, 0               # Loop counter
        loop:
                bge  $t0, $s1, end        # if end of file reached OR if there is an error in the file
                lb   $t5, buffer($t0)     # load next byte from file
                jal  checkhexa            # check for hexadecimal numbers
                addi $t0, $t0, 1          # increment loop counter
                j    loop
        end:
                jal  output
                jal  fileClose
                li   $v0, 10
                syscall

        fileRead:
                # Open file for reading
                li   $v0, 13              # system call for open file
                la   $a0, fin             # input file name
                li   $a1, 0               # flag for reading
                li   $a2, 0               # mode is ignored
                syscall                   # open a file
                move $s0, $v0             # save the file descriptor

                # reading from file just opened
                li   $v0, 14              # system call for reading from file
                move $a0, $s0             # file descriptor
                la   $a1, buffer          # address of buffer from which to read
                li   $a2, 100000          # hardcoded buffer length
                syscall                   # read from file
                jr   $ra

    Any help would be appreciated.
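
    Just to pin down the text transformation the checkhexa routine needs to perform, here is an illustration of mine in a high-level language (not MIPS): a traditional assembler hex literal starts with a decimal digit, continues with hex digits, and ends in h or H, and the rewrite drops that suffix and prepends 0x.

        # Illustration only: the detection pattern and rewrite, expressed as a regex.
        import re

        line = "    li   $t1, 0FFh      # mask = 0A3h + 1Bh"
        converted = re.sub(r'\b([0-9][0-9A-Fa-f]*)[hH]\b', r'0x\1', line)
        print(converted)   # "    li   $t1, 0x0FF      # mask = 0x0A3 + 0x1B"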


  • Missing Intellisense While Describing Custom Control Properties Declaratively

    - by Albert Bori
    So, I've been working on this project for a few days now, and have been unable to get IntelliSense support for my custom-defined inner properties for a user control (ascx, mind you). I have seen the solution to this (using server controls, .cs mind you) many times, spelled out in this article very well. Everything works for me while using ascx controls except IntelliSense. Here's the outline of my code:

        [PersistChildren(true)]
        [ParseChildren(typeof(BreadCrumbItem))]
        [ControlBuilder(typeof(BreadCrumbItem))]
        public partial class styledcontrols_buttons_BreadCrumb : System.Web.UI.UserControl
        {
            ...

            [PersistenceMode(PersistenceMode.InnerDefaultProperty)]
            public List<BreadCrumbItem> BreadCrumbItems
            {
                get { return _breadCrumbItems; }
                set { _breadCrumbItems = value; }
            }

            ...

            protected override void AddParsedSubObject(object obj)
            {
                base.AddParsedSubObject(obj);
                if (obj is BreadCrumbItem)
                    BreadCrumbItems.Add(obj as BreadCrumbItem);
            }

            ...

            public class BreadCrumbItem : ControlBuilder
            {
                public string Text { get; set; }
                public string NavigateURL { get; set; }

                public override Type GetChildControlType(string tagName, System.Collections.IDictionary attribs)
                {
                    if (String.Compare(tagName, "BreadCrumbItem", true) == 0)
                    {
                        return typeof(BreadCrumbItem);
                    }
                    return null;
                }
            }
        }

    Here's my markup (which works fine, there's just no IntelliSense on the child object declarations):

        <%@ Register src="../styledcontrols/buttons/BreadCrumb.ascx" tagname="BreadCrumb" tagprefix="uc1" %>
        ...
        <uc1:BreadCrumb ID="BreadCrumb1" runat="server" BreadCrumbTitleText="Current Page">
            <BreadCrumbItem Text="Home Page" NavigateURL="~/test/breadcrumbtest.aspx?iwentsomewhere=1" />
            <BreadCrumbItem Text="Secondary Page" NavigateURL="~/test/breadcrumbtest.aspx?iwentsomewhere=1" />
        </uc1:BreadCrumb>

    I think the issue lies with how the IntelliSense engine traverses supporting classes. All the working examples I see of this are not ascx, but Web Server Controls (.cs, in a compiled assembly). If anyone could shed some light on how to accomplish this with ascx controls, I'd appreciate it.


  • Select box is not working properly after including google custom search box in web page

    - by Vinay
    I have a select box and a Google custom search box on a page. When I choose an option from the select box, navigate away from the page, and then come back to the same page, the option is no longer selected (which breaks the select box's default behavior). The code is below:

        <script src="https://www.google.com/jsapi" type="text/javascript"></script>
        <script type="text/javascript">
            google.load('search', '1', {language : 'en'});
            google.setOnLoadCallback(function() {
                var customSearchControl = new google.search.CustomSearchControl('004920913350056953771:kpkclvhujzk');
                customSearchControl.setResultSetSize(google.search.Search.FILTERED_CSE_RESULTSET);
                var options = new google.search.DrawOptions();
                options.setAutoComplete(true);
                options.enableSearchboxOnly("<?=$homeurl?>my_results.php", "query");
                customSearchControl.draw('cse-search-form', options);
            }, true);
        </script>

        <select multiple="yes">
            <option>1</option>
            <option>2</option>
        </select>

    If I remove the custom search script, the selected option in the select box is retained even after navigating away from the page (the default behavior). It works fine in Chrome and IE, but not in Firefox. Is there a solution so that the select box works correctly even with the search box present? The order must stay the same: 1) search box, 2) select box.


  • PrinterSettings.IsValid always returning false

    - by Jarrod
    In our code, we have to give the users a list of printers to choose from. The user then chooses a printer, and it is checked to verify it is valid before printing. On a Windows 2003 server with IIS 6, this works fine. On a Windows 2008 server with IIS 7, it fails each time impersonate is set to true.

        PrinterSettings printerSetting = new PrinterSettings();
        printerSetting.PrinterName = ddlPrinterName.SelectedItem.Text;

        if (!printerSetting.IsValid)
        {
            lblMsg.Text = "Server Printer is not valid.";
        }
        else
        {
            lblMsg.Text = "Success";
        }

    Each time this code runs, the "Server Printer is not valid" message is displayed, but only if impersonate is set to true. If impersonate is set to false, the success message is displayed. The impersonation user has full rights to the printer. Is there a way to catch the actual reason the printer is not valid? Is there some other 2008 setting I should check?


  • undefined references

    - by Brandon
    Hello, I'm trying to compile some Fortran code and I'm running into some confusing linking errors. I have some code that I compile and place into a static library:

        > gfortran -c -I../../inc -o bdout.o bdout.F
        > ar rv libgeo.a bdout.o

    I then try to compile some simple test code against that library and get the following:

        > gfortran -o mytest -L -lgeo mytest.F
        /tmp/cc4uvcsj.o: In function `MAIN__':
        mytest.F:(.text+0xb0): undefined reference to `ncwrite1_'
        collect2: ld returned 1 exit status

    It's not in the object naming, because everything looks fine:

        > nm -u libgeo.a
        bdout.o:
                 U _gfortran_exit_i4
                 U _gfortran_st_write
                 U _gfortran_st_write_done
                 U _gfortran_transfer_character
                 U _gfortran_transfer_integer
                 U ncobjcl_
                 U ncobjwrp_
                 U ncopencr_
                 U ncopenshcr_
                 U ncopenwr_
                 U ncwrite1_
                 U ncwrite2_
                 U ncwrite3_
                 U ncwrite4_
                 U ncwritev_

    I can check the original object file too:

        > nm -u bdout.o
                 U _gfortran_exit_i4
                 U _gfortran_st_write
                 U _gfortran_st_write_done
                 U _gfortran_transfer_character
                 U _gfortran_transfer_integer
                 U ncobjcl_
                 U ncobjwrp_
                 U ncopencr_
                 U ncopenshcr_
                 U ncopenwr_
                 U ncwrite1_
                 U ncwrite2_
                 U ncwrite3_
                 U ncwrite4_
                 U ncwritev_

    The test code simply contains a single call to a function defined in bdout.o:

        program hello
            print *,"Hello World!"
            call ncwrite1( istat, f, ix2, ix3, ix4, ix5, ih )
        end program hello

    I can't figure out what the problem is. Does anyone have any suggestions? Maybe even just a way to track the problem down? Cheers.


  • Altering lazy-loaded object's private variables

    - by Kevin Pang
    I'm running into an issue with private setters when using NHibernate and lazy loading. Let's say I have a class that looks like this:

        public class User
        {
            public int Foo { get; private set; }
            public IList<User> Friends { get; set; }

            public void SetFirstFriendsFoo()
            {
                // This line works in a unit test but does nothing during a live run with
                // a lazy-loaded Friends list
                Users(0).Foo = 1;
            }
        }

    The SetFirstFriendsFoo call works perfectly inside a unit test (as it should, since objects of the same type can access each other's private properties). However, when running live with a lazy-loaded Friends list, the SetFirstFriendsFoo call silently fails. I'm guessing the reason for this is that at run-time the Users(0).Foo object is no longer of type User but of a proxy class that inherits from User, since the Friends list was lazy-loaded.

    My question is this: shouldn't this generate a run-time exception? You get compile-time errors if you try to access another class's private properties, but when you run into a situation like this it looks like the app just ignores you and continues along its way.


  • Need some ignition for learning Embedded Systems

    - by Rahul
    I'm very much interested in building applications for embedded devices. I'm in my 3rd year of Electrical Engineering and I'm passionate about coding, algorithms, the Linux OS, etc. By Googling I also found that Linux is considered one of the best OSes for embedded devices (maybe it is, maybe it isn't). I want to work for companies which work on mobile applications. I'm a newbie in this domain, and my skills include C/C++ and MySQL. I need help getting started in the domain of embedded systems: how and where to start, hardware prerequisites, the necessary programming skills, and also what kinds of embedded applications there are. I've heard of ARM, firmware, and PIC microcontrollers, but I don't know anything about them and just need a proper introduction. Thanks.

    P.S.: I'm currently reading Bjarne Stroustrup's C++ lectures at Texas A&M University, and one chapter in them describes embedded systems programming.


  • How do I efficiently write a "toggle database value" function in AJAX?

    - by AmbroseChapel
    Say I have a website which shows the user ten images and asks them to categorise each image by clicking on buttons: a button for "funny", a button for "scary", a button for "pretty", and so on. These buttons aren't exclusive; a picture can be both funny and scary.

    The user clicks the "funny" button. An AJAX request is sent off to the database to mark that image as funny. The "funny" button lights up, by assigning a class in the DOM to mark it as "on". But the user made a mistake: they meant to hit the next button over. They should click "funny" again to turn it off, right?

    At this point I'm not sure what the most efficient way to proceed is. The database knows that the "funny" flag is set, but it's inefficient to query the database every time a button is clicked just to ask whether the flag is set, and then make a second database call to toggle it. Should I infer the state of the database flag from the DOM, i.e. if that button has the class "on" then the flag must be set, and branch at that point? Or would it be better to have a data structure in JavaScript on the page which duplicates the state of each image in the database, so that every time I set the database flag to true I also set the value in the JavaScript data to true, and so on?


  • Rails emails not sending in staging environment

    - by jrdioko
    In a Rails application I set up a new staging environment with the following parameters in its environments/ file:

        config.action_mailer.perform_deliveries = true
        config.action_mailer.raise_delivery_errors = true
        config.action_mailer.delivery_method = :smtp

    However, when the system generates an email, it gets printed to the staging.log file instead of being sent. My SMTP settings work fine in other environments. What configuration am I missing to get the emails to actually send?

    Edit: Yes, the staging box is set up with valid configuration for an SMTP server it has access to. It seems like the problem isn't with the SMTP settings (if it was, wouldn't I get errors in the logs?), but with the Rails configuration. The application is still redirecting emails to the log file (saying "Sent mail: ...") as opposed to actually going through SMTP.

    Edit #2: It looks like the emails actually have been sending correctly; they just happen to be printed to the log as well. I'm trying to use the sanitize_email gem to redirect the mail to another address, and that doesn't seem to be working, which is why I thought the emails weren't going out. So I think that solves my problem, although I'm still curious what in ActionMailer's settings controls whether emails are sent, logged to the log file, or both.

    Edit #3: The problem with sanitize_email boiled down to me needing to add the new staging environment to ActionMailer::Base.local_environments. I'll keep this question open to see if anyone can answer my last question (what determines whether ActionMailer's emails get sent out, logged to the log file, or both?)


  • Rubygems on Debian: Gems won't load (LoadError)

    - by daswerth
    I've installed the development version of Crunchbang, a Linux distro based off Debian. I got Ruby and Rubygems installed, but I can't get the gems I've installed to load. Here is a command-line session:

        $ ruby -v
        ruby 1.9.1p378 (2010-01-10 revision 26273) [i486-linux]

        $ gem env
        RubyGems Environment:
          - RUBYGEMS VERSION: 1.3.6
          - RUBY VERSION: 1.9.1 (2010-01-10 patchlevel 378) [i486-linux]
          - INSTALLATION DIRECTORY: /usr/lib/ruby1.9.1/gems/1.9.1
          - RUBY EXECUTABLE: /usr/bin/ruby1.9.1
          - EXECUTABLE DIRECTORY: /usr/bin
          - RUBYGEMS PLATFORMS:
            - ruby
            - x86-linux
          - GEM PATHS:
            - /usr/lib/ruby1.9.1/gems/1.9.1
            - /home/corey/.gem/ruby/1.9.1
          - GEM CONFIGURATION:
            - :update_sources => true
            - :verbose => true
            - :benchmark => false
            - :backtrace => false
            - :bulk_threshold => 1000
          - REMOTE SOURCES:
            - http://rubygems.org/

        $ echo $PATH
        /home/corey/bin:/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games:/home/corey/.gem/ruby/1.9.1:/usr/lib/ruby1.9.1/gems/1.9.1

        $ gem list -d nokogiri

        *** LOCAL GEMS ***

        nokogiri (1.4.1)
            Authors: Aaron Patterson, Mike Dalessio
            Rubyforge: http://rubyforge.org/projects/nokogiri
            Homepage: http://nokogiri.org
            Installed at: /usr/lib/ruby1.9.1/gems/1.9.1

            Nokogiri (?) is an HTML, XML, SAX, and Reader parser

        $ ruby -r rubygems -e "require 'nokogiri'"
        -e:1:in `require': no such file to load -- nokogiri (LoadError)
                from -e:1:in `<main>'

    I've encountered similar problems on Ubuntu before, but they were easy to fix. I can't figure out what's wrong in this particular case, and Google didn't seem to know either. Any help would be greatly appreciated!

    By the way, this is my first submission to Stack Overflow. I hope this question is relevant. :)


  • In languages which create a new scope each time in a loop block, is a new local copy of the loop variable created?

    - by Jian Lin
    It seems that in languages like C, Java, and Ruby (as opposed to JavaScript), a new scope is created for each iteration of a loop block, and the local variable defined for the loop is actually made into a new local variable every single time and recorded in this new scope. For example, in Ruby:

        p RUBY_VERSION

        $foo = []
        (1..5).each do |i|
          $foo[i] = lambda { p i }
        end

        (1..5).each do |j|
          $foo[j].call()
        end

    The print-out is:

        [MacBook01:~] $ ruby scope.rb
        "1.8.6"
        1
        2
        3
        4
        5
        [MacBook01:~] $

    So it looks like when a new scope is created, a new local copy of i is also created and recorded in this new scope, so that when the function is executed at a later time, the "i" is found in those scope chains as 1, 2, 3, 4, 5 respectively. Is this true? (It sounds like a heavy operation.) Contrast that with:

        p RUBY_VERSION

        $foo = []
        i = 0
        (1..5).each do |i|
          $foo[i] = lambda { p i }
        end

        (1..5).each do |j|
          $foo[j].call()
        end

    This time, the i is defined before entering the loop, so Ruby 1.8.6 will not put this i in the new scope created for the loop block, and therefore when i is looked up in the scope chain, it always refers to the i in the outer scope and gives 5 every time:

        [MacBook01:~] $ ruby scope2.rb
        "1.8.6"
        5
        5
        5
        5
        5
        [MacBook01:~] $

    I heard that in Ruby 1.9, i will be treated as a local defined for the loop even when there is an i defined earlier? The operation of creating a new scope and a new local copy of i each time through the loop seems heavy, and it wouldn't have mattered if we were not invoking the functions at a later time. So when the functions don't need to be invoked at a later time, could the interpreter (or the compiler, for C/Java) optimize it so that there is no local copy of i each time?
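
    For comparison, here is a small sketch of mine showing the opposite default in Python, where closures created in a loop share a single loop variable unless it is explicitly rebound per iteration:

        # Sketch only (not part of the original question): Python's late binding.
        funcs = []
        for i in range(1, 6):
            funcs.append(lambda: i)            # every lambda refers to the same i
        print([f() for f in funcs])            # [5, 5, 5, 5, 5]

        funcs = [lambda i=i: i for i in range(1, 6)]   # default arg binds a fresh i per iteration
        print([f() for f in funcs])            # [1, 2, 3, 4, 5]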

    Read the article
