Search Results

Search found 18096 results on 724 pages for 'let'.

Page 563/724

  • How do I read text from a serial port?

    - by user2164
    I am trying to read data off of a Windows serial port through Java. I have the javax.comm libraries and am able to get some data, but not correct data. When I read the port into a byte array and convert it to text, I get a series of characters but no real text string. I have tried interpreting the byte array as both "UTF-8" and "US-ASCII". Does anyone know how to get real text out of this? Here is my code:

        while (inputStream.available() > 0) {
            int numBytes = inputStream.read(readBuffer);
            System.out.println("Reading from " + portId.getName() + ": ");
            System.out.println("Read " + numBytes + " bytes");
        }
        System.out.println(new String(readBuffer));
        System.out.println(new String(readBuffer, "UTF-8"));
        System.out.println(new String(readBuffer, "US-ASCII"));

    The output of the last three lines will not let me copy and paste (I assume because they are not normal characters). Here is the output in hex:

        78786000e67e9e60061e8606781e66e0869e98e086f89898861878809e1e9880

    I am reading from a Hollux GPS device which does output in string format. I know this for sure because I did it through C#. The settings that I am using for communication, which I know are right from the work in the C# app, are: baud rate 9600, data bits 8, stop bits 1, parity none.
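    A minimal sketch of one way to accumulate only the bytes actually read before decoding them - this is illustrative, not the poster's code, and it assumes a plain java.io.InputStream obtained from the javax.comm serial port:

        import java.io.ByteArrayOutputStream;
        import java.io.IOException;
        import java.io.InputStream;

        public class SerialReadSketch {
            // Collects whatever is currently available and decodes only the bytes
            // that were actually read, instead of the whole fixed-size buffer.
            static String readAvailable(InputStream inputStream) throws IOException {
                ByteArrayOutputStream accumulated = new ByteArrayOutputStream();
                byte[] readBuffer = new byte[1024];
                while (inputStream.available() > 0) {
                    int numBytes = inputStream.read(readBuffer);
                    if (numBytes > 0) {
                        accumulated.write(readBuffer, 0, numBytes);
                    }
                }
                return new String(accumulated.toByteArray(), "US-ASCII");
            }
        }

    If the decoded text is still unreadable, the port settings (baud rate, data bits, stop bits, parity) are the usual suspects, since GPS text output of this kind is plain ASCII.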

    Read the article

  • Maintaining a pool of DAO class instances vs. using the new operator

    - by Fazal
    We have been trying to benchmark our application performance in multiple ways for some time now. I always believed that object creation in Java using Class.newInstance() was not slow (at least after version 1.4 of Java), but we ran a test anyway comparing the newInstance method against maintaining an object pool of 1000 objects. We did about 200K iterations of loading data from the DB using JDBC and populating these objects. I was amazed (even shocked) to see that the newInstance code was almost 10 times slower than the object pool code. These objects represent tables with about 50 fields, all of string type. Can someone share their thoughts on this issue? I am now more inclined to think that object pooling of at least some DAO instances is the better option. The pool size, as I see it right now, should be large enough to meet the size of an average request. There is a flip side: my memory footprint will go up, but I am beginning to wonder if this kind of idea makes sense, at least for some of the DAO entities representing tables of about 50 or more columns. Please share your ideas and let me know if this has been tried by someone, or if I am missing some point here.
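    For reference, a minimal sketch of the kind of fixed-size pool being compared against, built on a standard BlockingQueue - the Dao class and its field are placeholders for illustration, not the actual DAO types:

        import java.util.concurrent.ArrayBlockingQueue;
        import java.util.concurrent.BlockingQueue;

        // Placeholder DAO type; the real ones map ~50 string columns.
        class Dao {
            String field1;
        }

        class DaoPool {
            private final BlockingQueue<Dao> pool;

            DaoPool(int size) {
                pool = new ArrayBlockingQueue<Dao>(size);
                for (int i = 0; i < size; i++) {
                    pool.add(new Dao());
                }
            }

            // Borrow an instance; blocks if the pool is exhausted.
            Dao borrow() throws InterruptedException {
                return pool.take();
            }

            // Clear per-request state and return the instance to the pool.
            void release(Dao dao) {
                dao.field1 = null;
                pool.offer(dao);
            }
        }

    Whether a pool like this really beats Class.newInstance() depends heavily on how the benchmark is written (JIT warm-up, whether reflection sits on the hot path), so the 10x figure is worth re-measuring in isolation.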

    Read the article

  • VS2010 : javascript intellisense : specifying properties for 'options' objects passed to methods

    - by Master Morality
    Since javascript intellisense actually seems to work in VS2010, I thought I might add some to those scripts I include in almost everything. The trouble is, on some complex functions, I use option objects instead of passing umpteen different parameters, like so:

        function myFunc(options) {
            var myVar1 = options.myVar1,
                myVar2 = options.myVar2,
                myVar3 = options.myVar3;
            ...
        }

    The trouble I am running into is that there doesn't seem to be a way to specify what properties options needs to have. I've tried this:

        function myFunc(options) {
            ///<summary>my func does stuff...</summary>
            ///<param name="options">
            ///myVar1 : the first var
            ///myVar2 : the second var
            ///myVar3 : the third var
            ///</param>
            var myVar1 = options.myVar1,
                myVar2 = options.myVar2,
                myVar3 = options.myVar3;
            ...
        }

    but the line breaks are removed and all the property comments run together, making them stupidly hard to read. I've tried the <para> tags, but to no avail. If anyone has any ideas on how I might achieve this, please let me know. -Brandon

    Read the article

  • R: outlier cleaning for each column in a dataframe by using quantiles 0.05 and 0.95

    - by Rainer
    Hi, I am an R novice. I want to do some outlier cleaning and an overall scaling from 0 to 1 before putting the sample into a random forest.

        g <- c(1000,60,50,60,50,40,50,60,70,60,40,70,50,60,50,70,10)

    If I do a simple scaling from 0 to 1, the result would be:

        > round((g - min(g))/abs(max(g) - min(g)), 1)
        [1] 1.0 0.1 0.0 0.1 0.0 0.0 0.0 0.1 0.1 0.1 0.0 0.1 0.0 0.1 0.0 0.1 0.0

    So my idea is to replace the values of each column that are greater than the 0.95 quantile with the next value smaller than the 0.95 quantile, and the same for the 0.05 quantile. So the pre-scaled result would be (first and last values replaced):

        g <- c(70,60,50,60,50,40,50,60,70,60,40,70,50,60,50,70,40)

    and scaled:

        > round((g - min(g))/abs(max(g) - min(g)), 1)
        [1] 1.0 0.7 0.3 0.7 0.3 0.0 0.3 0.7 1.0 0.7 0.0 1.0 0.3 0.7 0.3 1.0 0.0

    I need this formula for a whole data frame, so the functional implementation within R should be something like:

        > apply(c, 2, function(x) x[x < quantile(x, 0.95)] <- max(x[x, ... max without the quantile(x, 0.95))

    Can anyone help? Spoken beside: if there exists a function that does this job directly, please let me know. I already checked out cut and cut2. cut fails because of non-unique breaks; cut2 would work, but only gives back string values or the mean value, and I need a numeric vector from 0 to 1. For trial:

        a <- c(100,6,5,6,5,4,5,6,7,6,4,7,5,6,5,7,1)
        b <- c(1000,60,50,60,50,40,50,60,70,60,40,70,50,60,50,70,10)
        c <- cbind(a,b)
        c <- as.data.frame(c)

    Regards and thanks for help, Rainer

    Read the article

  • Flash/Flex sending XML to Rails App

    - by bdicasa
    I'm trying to send some XML to a Rails app from Flex. I'm using the URLRequest and URLLoader objects. However, I'm having trouble determining how to send the XML and the _method parameter to the Rails app using these Flash objects. Below is how I'm currently trying to achieve this:

        var request:URLRequest = new URLRequest();
        request.method = URLRequestMethod.POST;
        request.data = new Object();
        request.data.xml = Blog.xml.toXMLString();
        request.contentType = "text/xml";

        var loader:URLLoader = new URLLoader();
        loader.addEventListener(Event.COMPLETE, saveCompleteHandler);

        var saveUrl:String = "";
        saveUrl = BASE_URL;
        if (Blog.isNewBlog) {
            // Set the rails REST method.
            request.data._method = "POST";
            saveUrl += "blogs.xml";
        } else {
            // Set the rails REST method.
            request.data._method = "PUT";
            saveUrl += "blogs/" + Blog.id.toString() + ".xml";
        }
        request.url = saveUrl;

        //trace(request.data.toString());
        loader.load(request);

    However, the only data that is getting sent to the server is [Object object]. If someone could let me know where I'm going wrong I'd greatly appreciate it. Thanks.

    Read the article

  • How to nest joins with CakePHP?

    - by Daren Thomas
    I'm trying to behave. So, instead of using the following SQL syntax:

        select * from tableA
        INNER JOIN tableB on tableA.id = tableB.tableA_id
        LEFT OUTER JOIN (tableC INNER JOIN tableD on tableC.tableD_id = tableD.id)
            on tableC.tableA_id = tableA.id

    I'd like to use the CakePHP model->find(). This will let me use the Paginator too, since that will not work with custom SQL queries as far as I understand (unless you hardcode one single pagination query into the model, which seems a little inflexible to me). What I've tried so far:

        /* inside tableA_controller.php, inside an action, e.g. "view" */
        $this->paginate['recursive'] = -1; # suppress model associations for now
        $this->paginate['joins'] = array(
            array(
                'table' => 'tableB',
                'alias' => 'TableB',
                'type' => 'inner',
                'conditions' => 'TableB.tableA_id = TableA.id',
            ),
            array(
                'table' => 'tableC',
                'alias' => 'TableC',
                'type' => 'left',
                'conditions' => 'TableC.tableA_id = TableA.id',
                'joins' => array( # this would be the obvious way to do it, but doesn't work
                    array(
                        'table' => 'tableD',
                        'alias' => 'TableD',
                        'type' => 'inner',
                        'conditions' => 'TableC.tableD_id = TableD.id'
                    )
                )
            )
        );

    That is, nesting the joins into the structure. But that doesn't work: CakePHP just ignores the nested 'joins' element, which was kind of what I expected, but sad. I have seen hints in comments on how to do subqueries (in the where clause) using a statement builder. Can a similar trick be used here?

    Read the article

  • TFS Solution build cascading to several other builds even when common components were not modified

    - by Bob Palmer
    Hey all, here is the issue I am currently trying to work through. We are using Team Foundation Server 2008 and utilizing the automated build support out of the box. We have one very large project that encompasses a number of interrelated components and web sites, each of which is set up as a Visual Studio solution file. Many of these solutions are highly interrelated since they may contain applications, or contain common libraries or shared components. We have roughly 20 or so applications, three large web sites, and about 20 components.

    Each solution may include projects from other solutions. For example, a solution for a console app would also include the project files for all of the components it utilizes, since we need to ensure that when someone changes a component and rebuilds it, the change is reflected in all of the projects that consume that component, and we can make sure nothing was broken.

    We have build projects for each solution, whether that's an application, component, or web site. For this example, we will call them solutions 01, 02, and 03. These reference multiple projects (both their own core project and test projects, plus the projects relating to various components). Solution 01 has projects A, B, and C. Solution 02 has projects C, D, and E. Solution 03 has projects E, F, and G.

    Now, for the problem. If I modify project A, the system will end up rebuilding all three solutions. Worse, all thirty solutions reference common projects used for data access (let's call it project H). Because they all share one project in common, if I modify any solution in my stack, even if it does not touch project H, I still end up kicking off every single build script.

    Any thoughts on how to address this? Ideally I would only want to kick off builds whose constituent projects were directly modified - i.e., in the example above, if I modified project C, I would only rebuild solutions 01 and 02. Thanks!

    Read the article

  • Maximizing the number of true concurrent / parallel http requests in Silverlight

    - by Clems
    Hi all. I'm using SL 4 beta and my app needs to make a lot of small HTTP requests to the server. I believe that when the number of allowed concurrent requests is exceeded, the subsequent requests are put in a queue. I am also aware that SL 4 has both an HTTP browser stack and an HTTP client stack, each with a different limit on the number of concurrent requests. Let's call those limits MAX_BROWSER and MAX_CLIENT.

    Also, I think I read somewhere that the number of concurrent requests is limited per domain, not overall, but I'm not sure if this applies to the HTTP client stack as well. It would mean that you CAN have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to domain2.com at the same time. I even believe that subdomains are considered different, so you can also have MAX_BROWSER requests to domain1.com AND MAX_BROWSER requests to sub.domain1.com at the same time. I have ownership of the services and domain names, so I could easily set up subdomains for my services.

    Given those considerations, I'm trying to maximize the number of concurrent HTTP requests to my server. Here are a few questions: Is it possible to use both stacks at the same time? Is the subdomain/domain story true for both stacks? For one? For neither? If so, that would mean that I could potentially have a number of concurrent requests equal to (MAX_BROWSER + MAX_CLIENT) * NUMBER_OF_DOMAINS, which would be fairly good. Is this correct? I'm kind of sharing my morning thoughts here, hoping somebody has experimented with these things. Thank you.

    Read the article

  • Using Sub-Types And Return Types in Scala to Process a Generic Object Into a Specific One

    - by pr1001
    I think this is about covariance but I'm weak on the topic... I have a generic Event class used for things like database persistence, let's say like this:

        class Event(
            subject: Long,
            verb: String,
            directobject: Option[Long],
            indirectobject: Option[Long],
            timestamp: Long) {
          def getSubject = subject
          def getVerb = verb
          def getDirectObject = directobject
          def getIndirectObject = indirectobject
          def getTimestamp = timestamp
        }

    However, I have lots of different event verbs and I want to use pattern matching and such with these different event types, so I will create some corresponding case classes:

        trait EventCC
        case class Login(user: Long, timestamp: Long) extends EventCC
        case class Follow(
          follower: Long,
          followee: Long,
          timestamp: Long
        ) extends EventCC

    Now, the question is, how can I easily convert generic Events to the specific case classes? This is my first stab at it:

        def event2CC[T <: EventCC](event: Event): T = event.getVerb match {
          case "login" => Login(event.getSubject, event.getTimestamp)
          case "follow" => Follow(
            event.getSubject,
            event.getDirectObject.getOrElse(0),
            event.getTimestamp
          )
          // ...
        }

    Unfortunately, this is wrong:

        <console>:11: error: type mismatch;
         found   : Login
         required: T
               case "login" => Login(event.getSubject, event.getTimestamp)
                                    ^
        <console>:12: error: type mismatch;
         found   : Follow
         required: T
               case "follow" => Follow(event.getSubject, event.getDirectObject.getOrElse(0), event.getTimestamp)

    Could someone with greater type-fu than me explain 1) if what I want to do is possible (or reasonable, for that matter), and 2) if so, how to fix event2CC. Thanks!

    Read the article

  • binding nested json object value to a form field

    - by Jack
    I am building a dynamic form to edit data in a JSON object. First, if something like this already exists, let me know. I would rather not build it, but I have searched many times for a tool and have found only tree-like structures that require entering quotes. I would be happy to treat all values as strings. This edit functionality is for end users, so it needs to be easy and not intimidating.

    So far I have code that generates nested tables to represent a JSON object. For each value I display a form field. I would like to bind the form field to the associated nested JSON value. If I could store a reference to the JSON value, I would build an array of references to each value in a JSON object tree. I have not found a way to do that with JavaScript. My last resort approach will be to traverse the table after edits are made. I would rather have dynamic updates, but a single submit would be better than nothing. Any ideas?

        // the json in files nests only a few levels. Here is the format of a simple case:
        {
            "researcherid_id": {
                "id_key": "researcherid_id",
                "description": "Use to retrieve bibliometric data",
                "url_template": [
                    {
                        "name": "Author Detail",
                        "url": "http://www.researcherid.com/rid/${key}"
                    }
                ]
            }
        }

        $.get('file.json', make_json_form);

        function make_json_form(response) {
            dataset = $.secureEvalJSON(response);
            // iterate through the object and generate form field for string values.
        }

        // Then after the form is edited I want to display the raw updated json
        // (then I want to save it, but that is for another thread).
        // Now I iterate through the form and construct the json object.
        // I would rather have the dataset object var updated on focus out after each edit.
        function show_json(form_id) {
            var r = {};
            var el = document.getElementById(form_id);
            table_to_json(r, el, null);
            $('body').html(formattedJSON(r));
        }

    Read the article

  • How to get DIVs into this code via JQuery

    - by ludz
    Heya everyone. I've been struggling along with this piece of code for the longest time; it's driving me insane. I have tried many different things and looked at past posts here, but nothing seems to be helping. Basically, I have jQuery pagination code in place and I want to add animated transitions between pages. With some assistance I got that working correctly; however, this causes the page to jump around as the new items fade in and out. To fix this I need a DIV wrapped around each selection of results. I have been trying to use .wrap, .html, .wrapInner, and .append, and I can't get any of it to work properly. The two areas where I believe the code needs to be placed are as follows:

        $('#content').children().slice(0, show_per_page).css('display', 'block');

    and

        $('#content').children().fadeOut('slow').slice(start_from, end_on).fadeIn('slow');

    Full original code: http://tutsvalley.com/tutorials/making-a-jquery-pagination-system/ Only the second line of code posted here has been altered. Basically, I want to wrap each group of sliced output in a DIV. I hope that makes sense; if you need any more information please let me know. Any ideas or suggestions on what to try would be much appreciated, as it's currently driving me crazy :) Ludz~

    Read the article

  • MKL Accelerated Math Libraries for Java...

    - by Kaopua
    I've looked at the related threads on StackOverflow and Googled with not much luck. I'm also very new to Java (I'm coming from a C# and .NET background), so please bear with me. There is so much available in the Java world that it's pretty overwhelming.

    I'm starting on a new Java-on-Linux project that requires some heavy and highly repetitious numerical calculations (i.e. statistics, FFT, linear algebra, matrices, etc.). So maximizing the performance of the mathematical operations is a requirement, as is ensuring the math is correct. Hence I have an interest in finding a Java library that leverages native acceleration such as MKL, and is proven (so commercial options are definitely a possibility here). In the .NET space there are highly optimized, MKL-accelerated commercial mathematical libraries such as Centerspace NMath and Extreme Optimization. Is there anything comparable in Java?

    Most of the math libraries I have found for Java either do not seem to be actively maintained (such as Colt) or do not appear to leverage MKL or other native acceleration (such as Apache Commons Math). I have considered trying to leverage MKL directly from Java myself (e.g. via JNI), but being new to Java (let alone interoperating between Java and native libraries), it seemed smarter to find a Java library that has already done this correctly, efficiently, and is proven.

    Again, I apologize if I am mistaken or misguided (even regarding any libraries I've mentioned) and for my ignorance of the Java offerings. It's a whole new world for me coming from the heavily commercialized Microsoft stack, so I could easily be mistaken on where to look and regarding the Java libraries I've mentioned. I would greatly appreciate any help or advice.
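    As an aside on the "ensuring the math is correct" part: whatever library ends up being chosen, a tiny pure-Java baseline is handy for sanity-checking results and for having something to compare timings against. A throwaway sketch (illustrative only; nothing here is MKL-specific):

        // Naive triple-loop matrix multiply used only as a correctness and
        // performance baseline when evaluating candidate math libraries.
        public class MatMulBaseline {
            static double[][] multiply(double[][] a, double[][] b) {
                int n = a.length, m = b[0].length, k = b.length;
                double[][] c = new double[n][m];
                for (int i = 0; i < n; i++) {
                    for (int p = 0; p < k; p++) {
                        double aip = a[i][p];
                        for (int j = 0; j < m; j++) {
                            c[i][j] += aip * b[p][j];
                        }
                    }
                }
                return c;
            }

            public static void main(String[] args) {
                double[][] a = {{1, 2}, {3, 4}};
                double[][] b = {{5, 6}, {7, 8}};
                double[][] c = multiply(a, b);
                System.out.println(c[0][0] + " " + c[0][1]); // 19.0 22.0
                System.out.println(c[1][0] + " " + c[1][1]); // 43.0 50.0
            }
        }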

    Read the article

  • Escape Quote in C# for javascript consumption

    - by Jason
    I have an ASP.NET web handler that returns the results of a query in JSON format:

        public static String dt2JSON(DataTable dt)
        {
            String s = "{\"rows\":[";
            if (dt.Rows.Count > 0)
            {
                foreach (DataRow dr in dt.Rows)
                {
                    s += "{";
                    for (int i = 0; i < dr.Table.Columns.Count; i++)
                    {
                        s += "\"" + dr.Table.Columns[i].ToString() + "\":\"" + dr[i].ToString() + "\",";
                    }
                    s = s.Remove(s.Length - 1, 1);
                    s += "},";
                }
                s = s.Remove(s.Length - 1, 1);
            }
            s += "]}";
            return s;
        }

    The problem is that sometimes the data returned has quotes in it, and I would need to JavaScript-escape these so that it can be properly created into a JS object. I need a way to find quotes in my data (quotes aren't there every time) and place a "\" character in front of them. Example response text (wrong):

        {"rows":[{"id":"ABC123","length":"5""},
        {"id":"DEF456","length":"1.35""},
        {"id":"HIJ789","length":"36.25""}]}

    I would need to escape the " so my response should be:

        {"rows":[{"id":"ABC123","length":"5\""},
        {"id":"DEF456","length":"1.35\""},
        {"id":"HIJ789","length":"36.25\""}]}

    Also, I'm pretty new to C# (coding in general, really), so if something else in my code looks silly, let me know.

    Read the article

  • Browser dependent problem rendering WMD with Showdown.js?

    - by CMPalmer
    This should be easy (at least no one else seems to be having a similar problem), but I can't see where it is breaking. I'm storing Markdown'ed text in a database that is entered on a page in my app. The text is entered using WMD and the live preview looks correct. On another page, I'm retrieving the markdown text and using Showdown.js to convert it back to HTML client-side for display. Let's say I have this text:

        The quick **brown** fox jumped over the *lazy* dogs.

        1. one
        1. two
        4. three
        17. four

    I'm using this snippet of JavaScript in my jQuery document ready event to convert it:

        var sd = new Attacklab.showdown.converter();
        $(".ClassOfThingsIWantConverted").each(function() {
            this.innerHTML = sd.makeHtml($(this).html());
        });

    I suspect this is where my problem is, but it almost works. In Firefox, I get what I expected - the sentence with the markdown converted, followed by a proper ordered list:

        The quick brown fox jumped over the lazy dogs.
        one
        two
        three
        four

    But in IE (7 and 6), I get this:

        The quick brown fox jumped over the lazy dogs. 1. one 1. two 4. three 17. four

    So apparently, IE is stripping the breaks in my markdown code and just converting them to spaces. When I do a view source of the original code (prior to the script running), the breaks are there inside the container DIV. What am I doing wrong?

    UPDATE: It is caused by the IE innerHTML/innerText "quirk", and I should have mentioned before that this is on an ASP.NET page using data-bound controls - there are obviously a lot of different workarounds otherwise.

    Read the article

  • What's the equivalent of gcc's -mwindows option in cmake?

    - by Runner
    I'm following the tutorial at http://zetcode.com/tutorials/gtktutorial/firstprograms/. It works, but each time I double-click on the executable there is a console window which I don't want. How do I get rid of that console? I tried this:

        add_executable(Cmd WIN32 cmd.c)

    But got this fatal error:

        MSVCRTD.lib(crtexew.obj) : error LNK2019: unresolved external symbol _WinMain@16 referenced in function ___tmainCRTStartup
        Cmd.exe : fatal error LNK1120: 1 unresolved externals

    While using gcc directly works:

        gcc -o Cmd cmd.c -mwindows

    I'm guessing it has something to do with the entry function int main(int argc, char *argv[]), but why does gcc work? How can I make it work with CMake?

    UPDATE: Let me paste the source code here for convenience:

        #include <gtk/gtk.h>

        int main(int argc, char *argv[])
        {
            GtkWidget *window;

            gtk_init(&argc, &argv);

            window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
            gtk_widget_show(window);

            gtk_main();

            return 0;
        }

    UPDATE2: Why does gcc -mwindows work but add_executable(Cmd WIN32 cmd.c) not? Maybe that's not the equivalent of -mwindows in CMake?

    Read the article

  • Plotting a 3D graph in MATLAB?

    - by lytheone
    Hello, I am currently a beginner, and I am using MATLAB to do some data analysis. I have a text file where the first row is formatted as follows: time; wave height 1; wave height 2; ... I have columns up to wave height 19 and 4000 rows in total. Data in the first column is time in seconds. From the 2nd column onwards, it is wave height elevation, which is in meters.

    At the moment I would like to ask MATLAB to plot a 3D graph with time on the x axis, wave elevation on the y axis, and the wave elevation that corresponds to each wave height number from 1 to 19; i.e. the data in column 2, row 10 is, let's say, 8 m, which corresponds to wave height 1 and the time at column 1, row 10. I have tried the following:

        clear;
        filename = 'abc.daf';
        path = 'C:\D';
        a = dlmread([path '\' filename], ' ', 2, 1);
        [nrows, ncols] = size(a);
        t = a(1:nrows, 1);  % define t from text file
        for i = (1:20),
            j = (2:21);
        end
        wi = a(:, j);
        for k = (2:4000),
            l = k;
        end
        r = a(l, :);

    But every time I try to plot them, the for loop for wi works fine, but with r = a(l,:) the plot only gives me the last time's data, and I want all data in the file to be plotted. Is there a way I can do that? I am sorry as it is a bit confusing, but I will be very thankful if anyone can help me out. Thank you!!!!!!!!!!

    Read the article

  • Setting Connection Parameters via ADO for SQL Server

    - by taspeotis
    Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries.

    We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query, and then the queries can return information between date ranges specified in our application. We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL.

    I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.

    1. Set all variables before running Recordset.OpenForward:

        Connection->Execute("SET @GetStartDate = ...");
        Connection->Execute("SET @GetEndDate = ...");
        // Repeat for all parameters

    Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT @variable statement?

    2. Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL:

        // Command has previously been created
        ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
        ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
        // Set values and attach etc...

    What I would like to know is if there is something like:

        Connection->SetParameter("GetStartDate", "20090101");
        Connection->SetParameter("GetEndDate", "20100101");

    where these persist for the lifetime of the connection, and the SQL can do something like @GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.

    Read the article

  • Creating an API for an ASP.NET MVC site with rate-limiting and caching

    - by Maxim Z.
    Recently, I've been very interested in APIs, specifically in how to create them. For the purpose of this question, let's say that I have created an ASP.NET MVC site that has some data on it; I want to create an API for this site. I have multiple questions about this:

    1. What type of API should I create? I know that REST and OData APIs are very popular. What are the pros and cons of each, and how do I implement them? From what I understand so far, REST APIs with ASP.NET MVC would just be actions that return JSON instead of Views, and OData APIs are documented here.

    2. How do I handle writing? Reading from both API types is quite simple. However, writing is more complex. With the REST approach, I understand that I can use HTTP POST, but how do I implement authentication? Also, with OData, how does writing work in the first place?

    3. How do I implement basic rate-limiting and caching? From my past experience with APIs, these are very important things, so that the API server isn't overloaded. What's the best way to set these two things up?

    4. Can I get some sample code? Any code that relates to C# and ASP.NET MVC would be appreciated.

    Thanks in advance! While this is a broad question, I think it's not too broad... :) There are some similar questions to this one that are about APIs, but I haven't found any that directly address the questions I outlined here.

    Read the article

  • Colorbox does not refresh its contents when I change URL

    - by Josef Sábl
    I am trying to make use of Colorbox on my webpage. One specific feature does not work as expected, though. In their example (Outside Webpage - Iframe, specifically), Google appears in the Colorbox when clicked. When you change the href on that link (using Firebug or JavaScript, e.g.) to, let's say, Yahoo, it works as expected and Yahoo is displayed after the click.

    But not in my case. Once I click the link, I have no way to change the URL. The href, as displayed in the browser, changes (to Yahoo), but the click always opens the first page (Google). What might be the problem? I almost copy-pasted their example:

        <script src="/lib/jquery.js"></script>
        <script src="/lib/jquery.colorbox.js"></script>

        <p><a class='example7' href="http://yahoo.com">Outside Webpage (Iframe)</a></p>

        <script>
            $(document).ready(function(){
                $(".example7").colorbox({width:"80%", height:"80%", iframe:true});
            });
        </script>

    My jQuery version is 1.4.1, but I also tried the same version their example uses (1.3.2).

    Read the article

  • Exporting dataset to Excel file with multiple sheets in ASP.NET

    - by engg
    In a C# ASP.NET 3.5 web application, I need to export multiple DataTables (or a DataSet) to an Excel 2007 file with multiple sheets, and then provide the user with an Open/Save dialog box, WITHOUT saving the Excel file on the web server.

    I have used Excel Interop before. I have been reading that it's not efficient and is not the best approach to achieve this, and that there are other ways to do it, two of them being: 1) converting the data in the DataTables to an XML string that Excel understands, and 2) using the Open XML SDK 2.0. It looks like the Open XML SDK 2.0 is better; please let me know. Are there any other ways to do it? I don't want to use any third-party tools.

    If I use the Open XML SDK, it creates an Excel file, right? I don't want to save it on the (Windows 2003) server hard drive (I don't want to use Server.MapPath; these Excel files are dynamically created, and they are not required on the server once the client gets them). I directly want to prompt the user to open/save it. I know how to do it when the 'XML string' approach is used. Please help. Thank you.

    Read the article

  • vb.net documentation and exception question

    - by dcp
    Let's say I have this sub in VB.NET:

        ''' <summary>
        ''' Validates that <paramref name="value"/> is not <c>null</c>.
        ''' </summary>
        '''
        ''' <param name="value">The object to validate.</param>
        '''
        ''' <param name="name">The variable name of the object.</param>
        '''
        ''' <exception cref="ArgumentNullException">If <paramref name="value"/> is <c>null</c>.</exception>
        Sub ValidateNotNull(ByVal value As Object, ByVal name As String)
            If value Is Nothing Then
                Throw New ArgumentNullException(name, String.Format("{0} cannot be null.", name))
            End If
        End Sub

    My question is, is it proper to call this ValidateNotNull (which is what I would call it in C#), or should I stick with VB terminology and call it ValidateNotNothing instead? Also, in my exception, is it proper to say "cannot be null", or would it be better to say "cannot be Nothing"? I sort of like the way I have it, but since this is VB, maybe I should use Nothing. But since the exception itself is called ArgumentNullException, it feels weird to make the message say "cannot be Nothing". Anyway, I guess it's pretty nitpicky; I just wondered what you folks thought.

    Read the article

  • Objective-C: How to access methods in other classes

    - by Adam
    I have what I know is a simple question, but after many searches in books and on the Internet, I can't seem to come up with a solution. I have a standard iPhone project that contains, among other things, a ViewController. My app works just fine at this point. I now want to create a generic class (extending NSObject) that will have some basic utility methods. Let's call this class Util.m (along with the associated .h file). I create the Util class (and .h file) in my project, and now I want to access the methods contained in that class from my ViewController. Here's an example of a simple version of Util.h:

        #import <Foundation/Foundation.h>

        @interface Util : NSObject {
        }

        - (void)myMethod;

        @end

    Then the Util.m file would look something like this:

        #import "Util.h"

        @implementation Util

        - (void)myMethod {
            NSLog(@"myMethod Called");
        }

        @end

    Now that my Util class is created, I want to call the myMethod method from my ViewController. In my ViewController's .h file, I do the following:

        #import "Util.h"

        @interface MyViewController : UIViewController {
            Util *utils;
        }

        @property (assign) Util *utils;

        @end

    Finally, in the ViewController.m, I do the following:

        #import "Util.h"

        @implementation MyViewController
        @synthesize utils;

        - (void)viewDidLoad {
            [super viewDidLoad];
            utils.myMethod;              // this doesn't work
            [utils myMethod];            // this doesn't work either
            NSLog(@"utils = %@", utils); // in the console, this prints "utils = (null)"
        }

    What am I doing wrong? I'd like to not only be able to directly reference other classes/methods in a simple util class like this, but I'd also like to directly reference other ViewControllers and their properties and methods as well. I'm stumped! Please help.

    Read the article

  • How do I write this JPQL query? (Java)

    - by Nitesh Panchal
    Hello. Say I have 5 tables:

        tblBlogs:           BlogId, BlogTitle
        tblBlogPosts:       BlogPostsId, BlogId, PostTitle
        tblBlogPostComment: BlogPostCommentId, CommentText, BlogPostsId, BlogMemberId
        tblUser:            UserId, FirstName
        tblBlogMember:      BlogMemberId, UserId, BlogId

    Now I want to retrieve only those blogs and posts on which the blog member has actually commented. So, in short, how do I write this plain old SQL:

        Select b.BlogTitle, bp.PostTitle, bpc.CommentText from tblBlogs b
        Inner Join tblBlogPosts bp on b.BlogId = bp.BlogId
        Inner Join tblBlogPostComment bpc on bp.BlogPostsId = bpc.BlogPostsId
        Inner Join tblBlogMember bm On bpc.BlogMemberId = bm.BlogMemberId
        Where bm.UserId = 1;

    As you can see, everything is an inner join, so only those rows will be retrieved for which the user has commented on some post of some blog. So, suppose he has joined 3 blogs whose ids are 1, 2, 3 (the blogs which the user has joined are in tblBlogMember), but the user has only commented in blog 2 (on, say, BlogPostId = 1). Then that row will be retrieved, and 1 and 3 won't, as it is an inner join.

    How do I write this kind of query in JPQL? In JPQL, we can only write simple queries like, say:

        Select bm.blogId from tblBlogMember Where bm.UserId = objUser

    where objUser is supplied using em.find(User.class, 1). Thus, once we get all blogs (here blogId represents a blog object) which the user has joined, we can loop through and do all the fancy things. But I don't want to fall into this looping business and write all those things in my Java code. Instead, I want to leave that for the database engine to do. So, how do I write the above plain SQL in JPQL? And what type of object will the JPQL query return? Since I am only selecting a few fields from all the tables, which class should I typecast the result to? I think I posted my requirement correctly; if I am not clear please let me know. Thanks in advance :).
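    For what it's worth, a rough sketch of how a join like this is often expressed in JPQL over mapped entities - the entity and relationship names (Blog, BlogPost, BlogPostComment, blogMember, user) are assumptions for illustration, since the actual mappings for these tables are not shown:

        import java.util.List;
        import javax.persistence.EntityManager;
        import javax.persistence.TypedQuery;

        public class CommentedBlogsQuery {
            // Returns one Object[] {blogTitle, postTitle, commentText} per comment
            // made by the given user, mirroring the inner joins of the plain SQL.
            public List<Object[]> findCommentedPosts(EntityManager em, long userId) {
                TypedQuery<Object[]> q = em.createQuery(
                    "SELECT b.blogTitle, bp.postTitle, bpc.commentText "
                  + "FROM BlogPostComment bpc "
                  + "JOIN bpc.blogPost bp "
                  + "JOIN bp.blog b "
                  + "WHERE bpc.blogMember.user.id = :userId",
                    Object[].class);
                q.setParameter("userId", userId);
                return q.getResultList();
            }
        }

    Because only individual fields are selected, each result row comes back as an Object[] rather than as an entity instance; a JPQL constructor expression (SELECT NEW ...) can map the row onto a small DTO class instead.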

    Read the article

  • ASP.NET MVC: How do I validate a model wrapped in a ViewModel?

    - by Deniz Dogan
    For the login page of my website I would like to list the latest news for my site and also display a few fields to let the user log in. So I figured I should make a login view model - I call this LoginVM. LoginVM contains a Login model for the login fields and a List<NewsItem> for the news listing. This is the Login model:

        public class Login
        {
            [Required(ErrorMessage="Enter a username.")]
            [DisplayName("Username")]
            public string Username { get; set; }

            [Required(ErrorMessage="Enter a password.")]
            [DataType(DataType.Password)]
            [DisplayName("Password")]
            public string Password { get; set; }
        }

    This is the LoginVM view model:

        public class LoginVM
        {
            public Login login { get; set; }
            public List<NewsItem> newsItems { get; set; }
        }

    This is where I get stuck. In my login controller, I get passed a LoginVM:

        [HttpPost]
        public ActionResult Login(LoginVM model, FormCollection form)
        {
            if (ModelState.IsValid)
            {
                // What?

    In the code I'm checking whether ModelState is valid, and this would work fine if the view model was actually the Login model, but now it's LoginVM, which has no validation attributes at all. How do I make LoginVM "traverse" through its members to validate them all? Am I doing something fundamentally wrong using ModelState in this manner?

    Read the article
