Search Results

Search found 17501 results on 701 pages for 'stored functions'.


  • How can I SETF an element in a tree by an accessor?

    - by Willi Ballenthin
    We've been using Lisp in my AI course. The assignments I've received have involved searching and generating tree-like structures. For each assignment, I've ended up writing something like:

        (defun initial-state ()
          (list 0      ; score
                nil    ; children
                0      ; value
                0))    ; something else

    and building my functions around these "states", which are really just nested lists with some loosely defined structure. To make the structure more rigid, I've tried to write accessors, such as:

        (defun state-score (state)
          (nth 2 state))

    This works for reading the value, which should be all I need in a nicely functional world. However, as time crunches and I start to madly hack, sometimes I want a mutable structure. I don't seem to be able to SETF the returned ...thing (place? value? pointer?). I get an error from something like:

        (setf (state-score *state*) 10)

    Sometimes I seem to have a little more luck writing the accessor/mutator as a macro:

        (defmacro state-score (state)
          `(nth 2 ,state))

    However, I don't know why this should be a macro, so I certainly shouldn't write it as a macro (except that sometimes it works, and programming by coincidence is bad). What is an appropriate strategy to build up such structures? More importantly, where can I learn about what's going on here (what operations affect the memory in what way)?
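    For reference, a minimal sketch (built only on the accessor from the question; the surrounding state layout is assumed) of the two standard ways to make such an accessor SETF-able: pair the reader with a (SETF ...) function, or let DEFSTRUCT generate SETF-able accessors for you.

        ;; Sketch 1: keep the list representation, add a SETF companion.
        (defun state-score (state)
          (nth 2 state))

        (defun (setf state-score) (new-value state)
          (setf (nth 2 state) new-value))

        ;; Sketch 2 (an alternative; don't combine with sketch 1, since
        ;; DEFSTRUCT defines STATE-SCORE etc. itself, already SETF-able):
        (defstruct state
          (score 0)
          (children nil)
          (value 0))

        ;; Either way, this now works:
        ;; (setf (state-score *state*) 10)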

    Read the article

  • What is the difference between my atoi() calls?

    - by Lucas
    I have a big number stored in a string and try to extract a single digit. But what are the differences between those calls?

        #include <cstdlib>   // for atoi (was missing)
        #include <iostream>
        #include <string>

        int main() {
            std::string bigNumber = "93485720394857230";
            char tmp = bigNumber.at(5);
            int digit = atoi(&tmp);
            int digit2 = atoi(&bigNumber.at(5));
            int digit3 = atoi(&bigNumber.at(12));
            std::cout << "digit: " << digit << std::endl;
            std::cout << "digit2: " << digit2 << std::endl;
            std::cout << "digit3: " << digit3 << std::endl;
        }

    This produces the following output:

        digit: 7
        digit2: 2147483647
        digit3: 57230

    The first one is the desired result. The second one seems to me to be a random number, which I cannot find in the string. The third one is the end of the string, but not just a single digit as I expected; it runs from the 12th index to the end of the string. Can somebody explain the different outputs to me? EDIT: Would this be an acceptable solution?

        char tmp[2] = {bigNumber.at(5), '\0'};
        int digit = atoi(tmp);
        std::cout << "digit: " << digit << std::endl;
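    For context, a hedged sketch of why the calls differ: atoi expects a NUL-terminated C string, so handing it the address of a single char (or of a char in the middle of the string's buffer) makes it keep reading neighbouring bytes until it happens to hit a non-digit. That is not a reliable parse; with enough digits the value overflows int, which is how 2147483647 (INT_MAX) appears. For a single digit, character arithmetic avoids atoi entirely:

        #include <iostream>
        #include <string>

        int main() {
            std::string bigNumber = "93485720394857230";
            // '0'..'9' are guaranteed contiguous, so this is portable:
            int digit = bigNumber.at(5) - '0';   // 7
            std::cout << "digit: " << digit << std::endl;
        }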

    Read the article

  • How to debug PHP?

    - by NutMotion
    Has anyone here tried object-oriented programming? Most probably every developer has, I guess :D I for one have never studied OO design patterns thoroughly, and trying to put it all together now proves at times thrilling, and many times frustrating too. Even more so when trying to do it in: PHP! All in all, my boss asked me to add some database persistence functions to her server, but most of all, she asked me to translate her already-working procedural code into working object-oriented code. So here I am, still standing on my PHP OO project. I'm (already) fed up with this "file logging only" PHP debugging capability. I believe there must be some (free or not too expensive) PHP debugging utility? I've heard about Zend Studio and PHPEd so far, which didn't quite do the trick, for reasons I don't remember correctly :) So what say ye on debugging PHP? Is there a tool that provides a good debug mode? What's more, don't forget I'm not speaking about the classical web request/response model. I'm talking about a debugging facility that can let you trigger a web service (aka client request) and go into debug mode on the SOAP web service side. Thanks for any input.
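    For what it's worth, the usual free option here is Xdebug, which does step debugging for whatever request hits the server, including a SOAP call into the service. A minimal php.ini sketch, assuming an Xdebug 2.x build is installed and the IDE listens on port 9000 (the extension path is illustrative):

        zend_extension=/usr/lib/php/modules/xdebug.so
        xdebug.remote_enable=1
        xdebug.remote_host=localhost   ; where the IDE runs
        xdebug.remote_port=9000
        xdebug.remote_autostart=1      ; debug every request, not only those carrying XDEBUG_SESSION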

    Read the article

  • Email function using templates. Includes via ob_start and global vars

    - by Geo
    I have a simple Email() class. It's used to send out emails from my website:

        <? Email::send($to, $subj, $msg, $options); ?>

    I also have a bunch of email templates written in plain HTML, pierced with a few PHP variables. E.g. /inc/email/templates/account_created.php:

        <p>Dear <?=$name?>,</p>
        <p>Thank you for creating an account at <?=$SITE_NAME?>. To login use the link below:</p>
        <p><a href="https://<?=$SITE_URL?>/account" target="_blank"><?=$SITE_NAME?>/account</a></p>

    In order to have the PHP vars rendered, I had to include the template inside my function. But since include does not return the contents, and instead sends them directly to the output, I had to wrap it with the buffer functions:

        <?
        abstract class Email {
            public static function send($to, $subj, $msg, $options = array()) {
                /* ... */
                ob_start();
                include '/inc/email/templates/account_created.php';
                $msg = ob_get_clean();
                /* ... */
            }
        }

    After that I realized that the PHP vars are not rendered, as they are inside the function scope, so I had to globalize the variables inside the template:

        <? global $SITE_NAME, $SITE_URL, $name; ?>
        <p>Dear <?=$name?>,</p> ...

    So the question is: is there a more elegant solution to this? Mainly I am concerned about my workarounds using ob_start() and global; for some reason that seems odd to me. Or is this pretty much the common practice?
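    One common pattern, sketched here as one possible answer rather than the only one: render the template through a small helper whose local scope is populated from an explicit array, so the template sees plain locals and no global is needed.

        <?php
        function render_template($file, array $vars) {
            extract($vars);          // makes $name, $SITE_NAME, ... local to this call
            ob_start();
            include $file;
            return ob_get_clean();
        }

        $msg = render_template('/inc/email/templates/account_created.php', array(
            'name'      => $name,
            'SITE_NAME' => $SITE_NAME,
            'SITE_URL'  => $SITE_URL,
        ));

    The ob_start()/ob_get_clean() pair itself is normal PHP templating practice; it's only the global declarations that this removes.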

    Read the article

  • WYSIWYG in Doxygen

    - by Adam Shiemke
    I'm working on a fairly large project written in C. The idea was to build a library of modular blocks that can be reused across several platforms. Each module is associated with a Word document in .docx format (a huge pain to diff and merge). In these docs, an interface section is specified, listing datatypes and publicly accessible functions. These were often inconsistent with the actual implementation in code, and wading through all this documentation was a pain. I've been working to switch to Doxygen to simplify document management. I haven't found a good way to embed the previously written documentation into the Doxygen output. I've copy-pasted it into sections and used modules to group the sources together, but the document sections look ugly in the comments (the output is pretty), and since Doxygen takes a while to parse through our code (about 30 minutes), validating formatting is a pain. Is there some way to WYSIWYG large blocks of documentation into Doxygen? I feel this would improve the number of people documenting their code, and the quality of that documentation. I considered linking to HTML, but that splits out the documentation. I also considered putting it inline in HTML, but this also seems like a pain and would mean everyone needs a WYSIWYG HTML editor (or some HTML skills). Any ideas on how to make things easier and prettier? Thanks loads.
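    One partial remedy worth sketching (standard Doxygen features; whether it fits the team's workflow is another matter): large prose blocks can live in standalone .dox files rather than in code comments, so editing them never touches the sources and a documentation-only rebuild is cheap.

        /*! \page module_foo Foo Module
            \section foo_interface Interface

            Long design prose goes here, outside the C sources; code comments
            can cross-reference it with \ref module_foo.
        */

    with the Doxyfile picking such files up via something like FILE_PATTERNS = *.c *.h *.dox.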

    Read the article

  • ASP.Net MVC 2 - How to set up a Cancel button with client side navigation

    - by arame3333
    Thanks to a previous question I found a useful link on multiple buttons: http://weblogs.asp.net/dfindley/archive/2009/05/31/asp-net-mvc-multiple-buttons-in-the-same-form.aspx What I want to do is have a cancel button on my page, similar to this:

        <button name="button" type="button" onclick="document.location.href=$('#cancelUrl').attr('href')">Cancel</button>
        <a id="cancelUrl" href="<%: Url.Action("Index", "Home") %>" style="display:none;"></a>

    However, although this code works, I really want to go back to the previous page. For Web Forms I could use the JavaScript Back() or Go(-1) functions, but they relied on postbacks. I could of course hard-code the previous page and controller as I have done above. However, I am struggling to find links that explain how Url.Action works, because if I do this, I also need to include an index parameter, and I am not clear how the syntax works for that. It seems odd how much coding it takes to do this. Out of curiosity, I am also wondering how you TDD client-side code like this.
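    On the Url.Action syntax question, a short sketch (the values are illustrative): route parameters go in the third argument as an anonymous object, and anything not matched by the route pattern becomes a query string entry.

        <%-- /Products/Index/5 under the default {controller}/{action}/{id} route: --%>
        <a href="<%: Url.Action("Index", "Products", new { id = 5 }) %>">Cancel</a>

        <%-- unmatched values become a query string: /Home/Index?page=2 --%>
        <a href="<%: Url.Action("Index", "Home", new { page = 2 }) %>">Cancel</a>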

    Read the article

  • Does WordPress clear $GLOBALS ?

    - by Brayn
    Hey. What I want to do is include one of my PHP scripts in a WordPress theme. The problem is that after I include the script file, I can't access variables declared in the script file from inside functions in the theme file. I have created a new file in the theme folder, added the same code as in header.php, and if I open that file directly it works just fine. So as far as I can tell it's something WordPress-related.

        /other/path/wordpress/wp-content/themes/theme-name/header.php  // this is broken
        /other/path/wordpress/wp-content/themes/theme-name/test.php    // this works
        /var/www/vhosts/domain/wordpress/ ->(symlink)-> /other/path/wordpress/
        /var/www/vhosts/domain/include_file.php

    Content of /var/www/vhosts/domain/include_file.php:

        $global_var = 'global';
        print_r($GLOBALS); // if I open this file directly this prints globals WITH $global_var;
                           // if this file is included in header.php it prints all the WP stuff WITHOUT $global_var;

    Content of /other/path/wordpress/wp-content/themes/theme-name/header.php:

        require '/path/to/include_file.php';
        print $global_var;   // this prints 'global' as expected

        function test() {
            global $global_var;
            print $global_var;   // this is NULL
        }
        test();

        print_r($GLOBALS); // this prints all the WP stuff WITHOUT $global_var in it
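    A sketch of the likely mechanism (the file names are the question's own): WordPress loads header.php from inside a function (get_header() ends up in load_template()), so the top level of header.php, and of anything it requires, is really function scope. Variables defined there become locals, not globals, which is why $GLOBALS never sees $global_var. Declaring it global at the definition site restores the expected behaviour:

        <?php
        // /var/www/vhosts/domain/include_file.php
        global $global_var;      // needed because this file can be included inside a function
        $global_var = 'global';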

    Read the article

  • Updating a local sqlite db that is used for local metadata & caching from a service?

    - by Pharaun
    I've searched through the site and haven't found a question/answer that quite answers my question; the closest one I found was "Syncing objects between two disparate systems best approach". Anyway, to begin: because there is no RSS feed available, I'm screen scraping a webpage. The code fetches the page, scrapes out all of the information that I'm interested in, and dumps it into a sqlite database, so that I can query the information at my leisure without repeatedly fetching from the website. However, I'm also storing various metadata on the data itself in the sqlite db, such as: have I looked at the data, is the data new or old, and a bookmark to a chunk of data (think of it as a collection of unrelated data, where the bookmark is just a pointer to where I am in processing/reading the said data). So right now my problem is trying to figure out how to update the local sqlite database with new and/or changed data from the website in a manner that is effective and straightforward. Here's my current idea:

      1. Download the page itself
      2. Create a temporary table for the parsed data to go into
      3. Do a comparison between the official and the temporary table, and copy updates and/or new information to the official table

    This process seems kind of complicated, because I would have to figure out how to determine whether the data in the temporary table is new, updated, or unchanged. So I am wondering if there isn't a better approach, or if anyone has any suggestions on how to architect/structure such a system?
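    For step 3, a sqlite sketch of one way to classify rows, assuming each scraped item has a stable natural key and that a stored content hash makes "changed" cheap to detect (column names are illustrative):

        CREATE TEMP TABLE scraped (item_id TEXT PRIMARY KEY, content TEXT, hash TEXT);

        -- rows that are new or changed: in scraped, but missing or different in official
        INSERT OR REPLACE INTO official (item_id, content, hash)
        SELECT s.item_id, s.content, s.hash
        FROM scraped s
        LEFT JOIN official o ON o.item_id = s.item_id
        WHERE o.item_id IS NULL OR o.hash <> s.hash;

    Unchanged rows never match the WHERE clause, so their metadata columns are left alone. (Note that INSERT OR REPLACE rewrites the whole row, so changed rows need their metadata carried over in the SELECT or re-initialized.)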

    Read the article

  • Is saving to database just to get an ID a bad hack?

    - by Narsil
    I hope the title is not too confusing. I am trying to create folders named with LINQ-to-SQL objects' IDs, but I actually have to create the folders before I can save anything into them. I will use them to keep user-uploaded files. As you can see, I have to create the folder with the FileID before I can save the file there, so I first save a record which will be edited, or maybe deleted, later:

        File newFile = new File();
        ... // add some values to fields so they don't throw rule violations
        db.AddFile(newFile);
        db.Save();
        System.IO.Directory.CreateDirectory("..Uploads/" + newFile.FileId.ToString());

    After that I will have to edit some fields and save again. Of course, the user might stop the upload and I would have to delete the record. I know I could write a stored procedure to get the next available FileID, but some other upload happening at the same time might get the same number, so both would write into the same directory, which is a thing I don't want. Should I go on with this? Would there be problems? Can you think of a better way?
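    One commonly suggested alternative, sketched here with illustrative names: key the folder on a client-generated GUID instead of the identity column, so no placeholder row is needed and concurrent uploads cannot collide.

        // Generate the folder name up front; save the row only once the upload succeeds.
        string uploadKey = System.Guid.NewGuid().ToString("N");
        string folder = System.IO.Path.Combine(uploadsRoot, uploadKey);  // uploadsRoot is illustrative
        System.IO.Directory.CreateDirectory(folder);

        // later, on success:
        File newFile = new File();
        newFile.UploadKey = uploadKey;   // assumes a column added to hold the key
        db.AddFile(newFile);
        db.Save();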

    Read the article

  • .NET IHttpHandler Streaming SQL Binary Data

    - by Yisman
    Hello everybody. I am trying to implement an IHttpHandler for streaming files. The files may be tiny thumbnails or gigantic movies, and the binaries are stored in SQL Server. I looked at a lot of code online, but something does not make sense: isn't streaming supposed to read the data piece by piece and move it over the line? Most of the code seems to first read the whole field from SQL Server into memory, and then use streaming only for the output writing. Wouldn't it be more efficient to actually stream from the database directly to HTTP, byte by byte (or in buffered chunks)? Here's my code so far, but I can't figure out the correct combination of the reader mode, the stream object, and the writing system:

        Public Sub ProcessRequest(ByVal context As HttpContext) Implements IHttpHandler.ProcessRequest
            context.Response.BufferOutput = False
            Dim FileField = safeparam(context.Request.QueryString("FileField"))
            Dim FileTable = safeparam(context.Request.QueryString("FileTable"))
            Dim KeyField = safeparam(context.Request.QueryString("KeyField"))
            Dim FileKey = safeparam(context.Request.QueryString("FileKey"))
            Using connection As New SqlConnection(ConfigurationManager.ConnectionStrings("Main").ConnectionString)
                Using command As New SqlCommand("SELECT " & FileField & "Bytes," & FileField & "Type FROM " & FileTable & " WHERE " & KeyField & "=" & FileKey, connection)
                    command.CommandType = Data.CommandType.Text
                End Using
            End Using
        End Sub

    Please be aware that this SQL command also returns the file extension (pdf, jpg, doc...) in the second field of the query. Thank you all very much.
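    A sketch of the chunked approach being asked about (error handling elided): open the reader with CommandBehavior.SequentialAccess, which tells ADO.NET not to buffer the whole row, then pull the BLOB through GetBytes one chunk at a time and write each chunk to the response.

        connection.Open()
        Using reader As SqlDataReader = command.ExecuteReader(Data.CommandBehavior.SequentialAccess)
            If reader.Read() Then
                Dim buffer(8191) As Byte   ' 8 KB chunks
                Dim offset As Long = 0
                Dim read As Long = reader.GetBytes(0, offset, buffer, 0, buffer.Length)
                While read > 0
                    context.Response.OutputStream.Write(buffer, 0, CInt(read))
                    offset += read
                    read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)
                End While
            End If
        End Using

    One catch: SequentialAccess requires reading columns in ordinal order, and the Content-Type header must be set before the body is written, so the extension/type column should come first in the SELECT.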

    Read the article

  • Import-PSSession is not importing cmdlets when used in a custom module

    - by Douglas Plumley
    I have a PowerShell script/function that works great when I use it in my PowerShell profile or manually copy/paste the function in the PowerShell window. I'm trying to make the function accessible to other members of my team as a module. I want to have the module stored in a central place so we can all add it to our PSModulePath. Here is a copy of the basic function:

        Function Connect-O365 {
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://ps.outlook.com/powershell/ -Credential $o365cred -Authentication Basic -AllowRedirection
            Import-PSSession $session365 -AllowClobber
        }

    If I save this function in my PowerShell profile it works fine. I can dot source a *.ps1 script with this function in it and it works as well. The issue is when I save the function as a *.psm1 PowerShell script module. The function runs fine, but none of the exported commands from the Import-PSSession are available. I think this may have something to do with the module scope. I'm looking for suggestions on how to get around this. EDIT: When I create the following module and run Connect-O365, the imported cmdlets will not be available:

        $scriptblock = {
            Function Connect-O365 {
                $o365cred = Get-Credential [email protected]
                $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
                Import-PSSession $session365 -AllowClobber
            }
        }
        New-Module -Name "Office 365" -ScriptBlock $scriptblock

    When I import the next module, without the Connect-O365 function, the imported cmdlets are available:

        $scriptblock = {
            $o365cred = Get-Credential [email protected]
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
            Import-PSSession $session365 -AllowClobber
        }
        New-Module -Name "Office 365" -ScriptBlock $scriptblock

    This appears to be a scope issue of some sort, just not sure how to get around it.
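    A sketch of the usual workaround: Import-PSSession builds a temporary module and imports it into the calling scope, which inside a .psm1 is the module's own scope. Re-importing that temporary module with Import-Module -Global pushes the commands into the session instead:

        Function Connect-O365 {
            $o365cred = Get-Credential
            $session365 = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri "https://ps.outlook.com/powershell/" -Credential $o365cred -Authentication Basic -AllowRedirection
            # Import-PSSession returns the temporary module it created; -Global
            # re-imports it at the session level rather than inside this module.
            Import-Module (Import-PSSession $session365 -AllowClobber) -Global
        }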

    Read the article

  • How does the socket API accept() function work?

    - by Eli Bendersky
    The socket API is the de facto standard for TCP/IP and UDP/IP communications (that is, networking code as we know it). However, one of its core functions, accept(), is a bit magical. To borrow a semi-formal definition: accept() is used on the server side. It accepts a received incoming attempt to create a new TCP connection from the remote client, and creates a new socket associated with the socket address pair of this connection. In other words, accept returns a new socket through which the server can communicate with the newly connected client. The old socket (on which accept was called) stays open, on the same port, listening for new connections. How does accept work? How is it implemented? There's a lot of confusion on this topic. Many people claim accept opens a new port and you communicate with the client through it. But this obviously isn't true, as no new port is opened. You actually can communicate through the same port with different clients, but how? When several threads call recv on the same port, how does the data know where to go? I guess it's something along the lines of the client's address being associated with a socket descriptor, and whenever data comes through recv it's routed to the correct socket, but I'm not sure. It'd be great to get a thorough explanation of the inner workings of this mechanism.
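    A minimal C sketch of the mechanism being asked about (error handling elided): the listening descriptor never changes, and each accept() returns a new descriptor whose connection the kernel identifies by the full four-tuple of client IP, client port, server IP, and server port. It is that tuple, not the port alone, that routes incoming segments to the right socket.

        #include <netinet/in.h>
        #include <sys/socket.h>

        void serve(int listen_fd)
        {
            for (;;) {
                struct sockaddr_in peer;
                socklen_t len = sizeof peer;
                /* new socket, same local port; demultiplexed by the 4-tuple */
                int conn_fd = accept(listen_fd, (struct sockaddr *) &peer, &len);
                if (conn_fd < 0)
                    continue;   /* a real server would inspect errno */
                /* recv()/send() on conn_fd now reach exactly this client */
            }
        }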

    Read the article

  • Which of these is better practice?

    - by Fletcher Moore
    You have a sequence of functions to execute. Case A: they do not depend on each other. Which of these is better?

        function main() {
            a();
            b();
            c();
        }

    or

        function main() {
            a();
        }
        function a() {
            ...
            b();
        }
        function b() {
            ...
            c();
        }

    Case B: each depends on successful completion of the previous. Which of these is better?

        function main() {
            if (a())
                if (b())
                    c();
        }

    or

        function main() {
            if (!a()) return false;
            if (!b()) return false;
            c();
        }

    or

        function main() {
            a();
        }
        function a() {
            ... // maybe return false
            b();
        }
        function b() {
            ... // maybe return false
            c();
        }

    Better, of course, means more maintainable and easier to follow.

    Read the article

  • How to pass a Dictionary variable to another procedure

    - by salvationishere
    I am developing a C# VS2008/SQL Server website application. I've never used the Dictionary class before, but I am trying to replace my Hashtable with a Dictionary variable. Here is a portion of my aspx.cs code:

        ...
        Dictionary<string, string> openWith = new Dictionary<string, string>();
        for (int col = 0; col < headers.Length; col++)
        {
            @temp = (col + 1);
            @tempS = @temp.ToString();
            @tempT = "@col" + @temp.ToString();
            ...
            openWith.Add(@tempT, headers[col]);
        }
        ...
        for (int r = 0; r < myInputFile.Rows.Count; r++)
        {
            resultLabel.Text = ADONET_methods.AppendDataCT(myInputFile, openWith);
        }

    But this is giving me a compiler error on this last line: Argument '2': cannot convert from 'System.Collections.Generic.Dictionary<string,string>' to 'string'. How do I pass the entire openWith variable to AppendDataCT? AppendDataCT is the method that calls my SQL stored proc. I want to pass in the whole row, where each row has a unique set of values that I want to add to my database table. For example, if each row requires values for cells A, B, and C, then I want to pass these 3 values to AppendDataCT, where all of these values are strings. How do I do this with Dictionary?
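    For reference, a sketch of the signature change the error implies (ADONET_methods is the question's own class; the body here is illustrative, not its actual code): declare the second parameter as the dictionary type, after which the existing call compiles unchanged.

        public static string AppendDataCT(DataTable inputFile, Dictionary<string, string> columns)
        {
            foreach (KeyValuePair<string, string> kvp in columns)
            {
                // kvp.Key is e.g. "@col1" (the stored proc parameter name),
                // kvp.Value is the matching header; bind them as SqlParameters here.
            }
            return "done";   // illustrative
        }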

    Read the article

  • Combining cache methods - memcache/disk based

    - by Industrial
    Hi! Here's the deal. We would have taken the complete static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most-used data. Here are the two approaches that we have thought of right now:

      1. Using memcache on all major queries and leaving it alone to do what it does best.
      2. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the user count increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though there is a theoretical compromise in performance. Memcached also appears to have some replication features available, which may come in handy when it's time to increase the nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcache, and on upgrading the memory as the load increases with the number of users? Thanks a lot!
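    For the combined option, a sketch of the read path being described (paths and TTLs illustrative; assumes the classic pecl Memcache extension and an already-connected $memcache instance):

        <?php
        function cache_get($key) {
            global $memcache;
            $value = $memcache->get($key);            // tier 1: memory
            if ($value !== false) return $value;

            $file = '/var/cache/app/' . md5($key);    // tier 2: disk
            if (is_file($file) && filemtime($file) > time() - 300) {
                $value = unserialize(file_get_contents($file));
                $memcache->set($key, $value, 0, 300); // promote back into memory
                return $value;
            }
            return false;  // miss: caller queries the DB and writes both tiers
        }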

    Read the article

  • Using ember-resource with couchdb - how can i save my documents?

    - by Thomas Herrmann
    I am implementing an application using ember.js and couchdb, and chose ember-resource as the database access layer because it nicely supports nested JSON documents. Since couchdb uses the attribute _rev for optimistic locking in every document, this attribute has to be updated in my application after saving the data to the couchdb. My idea for implementing this is to reload the data right after saving to the database, and get the new _rev back with the rest of the document. Here is my code for this:

        // Since we use CouchDB, we have to make sure that we invalidate and re-fetch
        // every document right after saving it. CouchDB uses an optimistic locking
        // scheme based on the attribute "_rev" in the documents, so we reload it in
        // order to have the correct _rev value.
        didSave: function() {
          this._super.apply(this, arguments);
          this.forceReload();
        },

        // reload resource after save is done, expire to make reload really do something
        forceReload: function() {
          this.expire();  // Everything OK up to this location
          Ember.run.next(this, function() {
            this.fetch()  // Sub-document is reset here, and *not* re-fetched!
              .fail(function(error) {
                App.displayError(error);
              })
              .done(function() {
                App.log("App.Resource.forceReload fetch done, got revision " + self.get('_rev'));
              });
          });
        }

    This works for most cases, but if I have a nested model, the sub-model is replaced with the old version of the data just before the fetch is executed! Interestingly enough, the correct (updated) data is stored in the database and the wrong (old) data is in the memory model after the fetch, although the _rev attribute is correct (as well as all attributes of the main object). Here is a part of my object definition:

        App.TaskDefinition = App.Resource.define({
          url: App.dbPrefix + 'courseware',
          schema: {
            id: String,
            _rev: String,
            type: String,
            name: String,
            comment: String,
            task: { type: 'App.Task', nested: true }
          }
        });

        App.Task = App.Resource.define({
          schema: {
            id: String,
            title: String,
            description: String,
            startImmediate: Boolean,
            holdOnComment: Boolean,
            ..... // other attributes and sub-objects
          }
        });

    Any ideas where the problem might be? Thanks a lot for any suggestion! Kind regards, Thomas

    Read the article

  • highlight the text of the DOM range element

    - by ganapati hegde
    I select some text on an HTML page (opened in Firefox) using the mouse, and using JavaScript functions I create/get the range object corresponding to the selected text:

        userSelection = window.getSelection();
        var rangeObject = getRangeObject(userSelection);

    Now I want to highlight all the text which comes under the range object. I am doing it like this:

        var span = document.createElement("span");
        rangeObject.surroundContents(span);
        span.style.backgroundColor = "yellow";

    Well, this works fine, but only when the range object (start point and end point) lies within a single text node; then it highlights the corresponding text. Example:

        <p>In this case, the text selected will be highlighted properly, because the selected text lies under a single textnode</p>

    But if the range object covers more than one text node, it does not work properly; it highlights only the text which lies in the first text node. Example:

        <p><h3>In this case</h3>, only the text inside the header(h3) will be highlighted, not any text outside the header</p>

    Any idea how I can make all the text which comes under the range object highlighted, independent of whether the range lies in a single node or multiple nodes? Thanks....
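    A sketch of the usual workaround (simplified: it wraps whole text nodes and ignores partial offsets at the range endpoints): surroundContents() throws when a range partially selects a non-text node, so instead walk the text nodes the range intersects and wrap each one separately.

        function highlightRange(range) {
          var walker = document.createTreeWalker(
              range.commonAncestorContainer, NodeFilter.SHOW_TEXT, null, false);
          var nodes = [];
          while (walker.nextNode()) {
            if (range.intersectsNode(walker.currentNode)) {
              nodes.push(walker.currentNode);
            }
          }
          for (var i = 0; i < nodes.length; i++) {
            var span = document.createElement("span");
            span.style.backgroundColor = "yellow";
            nodes[i].parentNode.insertBefore(span, nodes[i]);
            span.appendChild(nodes[i]);   // move the text node inside the span
          }
        }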

    Read the article

  • Why are Managed Beans not loaded in Tomcat?

    - by c0d3x
    Hi, I created a JSF 2 web application with Facelets. The libs for JSF were stored in tomcat/lib, to share them between several applications. I thought maybe it would be better to store the libs inside the WEB-INF/lib folder of the application, to make the application more independent from server configuration. Now, when I start Tomcat via Eclipse, the managed beans are loaded and working. But when I start Tomcat directly / standalone, the managed beans are not loaded automatically. I used @ManagedBean and @SessionScoped / @RequestScoped annotations to declare classes as managed beans. Why is this? What can I do to fix it? I don't use any faces-config.xml file yet. Thanks in advance. EDIT: Maybe this helps to see what's going on:

        javax.el.PropertyNotFoundException: /Artikel.xhtml @12,108 value="#{artikelBackingBean.nameFilterPattern}": Target Unreachable, identifier 'artikelBackingBean' resolved to null
            at com.sun.faces.facelets.el.TagValueExpression.getType(TagValueExpression.java:93)
            at com.sun.faces.renderkit.html_basic.HtmlBasicInputRenderer.getConvertedValue(HtmlBasicInputRenderer.java:95)
            at javax.faces.component.UIInput.getConvertedValue(UIInput.java:1008)
            at javax.faces.component.UIInput.validate(UIInput.java:934)
            at javax.faces.component.UIInput.executeValidate(UIInput.java:1189)
            at javax.faces.component.UIInput.processValidators(UIInput.java:691)
            at javax.faces.component.UIForm.processValidators(UIForm.java:243)
            at javax.faces.component.UIComponentBase.processValidators(UIComponentBase.java:1080)
            at javax.faces.component.UIViewRoot.processValidators(UIViewRoot.java:1180)
            at com.sun.faces.lifecycle.ProcessValidationsPhase.execute(ProcessValidationsPhase.java:76)
            at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
            at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118)
            at javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
            at java.lang.Thread.run(Thread.java:619)
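    One thing worth checking, offered as a sketch rather than a guaranteed fix: since the question runs without any faces-config.xml, adding an empty version-2.0 descriptor to WEB-INF makes the app's JSF 2.0 status explicit and removes the descriptor as a variable when the jars move between tomcat/lib and WEB-INF/lib (the version attribute is what matters for annotation scanning; it must not be an older 1.x schema).

        <?xml version="1.0" encoding="UTF-8"?>
        <!-- WEB-INF/faces-config.xml: intentionally empty -->
        <faces-config xmlns="http://java.sun.com/xml/ns/javaee"
                      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                      xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                          http://java.sun.com/xml/ns/javaee/web-facesconfig_2_0.xsd"
                      version="2.0">
        </faces-config>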

    Read the article

  • C: 8x8 -> 16 bit multiply precision guaranteed by integer promotions?

    - by craig-blome
    I'm trying to figure out if the C Standard (C90, though I'm working off Derek Jones' annotated C99 book) guarantees that I will not lose precision multiplying two unsigned 8-bit values and storing to a 16-bit result. An example statement is as follows:

        unsigned char foo;
        unsigned int foo_u16 = foo * 10;

    Our Keil 8051 compiler (v7.50 at present) will generate a MUL AB instruction which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to an unsigned int first:

        unsigned int foo_u16 = (unsigned int)foo * 10;

    then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16-bit integer multiply routine. I would like to argue beyond reasonable doubt that this defensive measure is not necessary. As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to unsigned int, the multiplication performed, and the result stored as unsigned int in foo_u16. If the compiler knows an instruction that does 8x8->16-bit multiplications without loss of precision, so much the better; but the precision is guaranteed. Am I reading this correctly? Best regards, Craig Blome
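    A sketch of the promotion reasoning, with one caveat the question's reading glosses over: under 6.3.1.1, unsigned char promotes to (signed) int, not unsigned int, wherever int can represent all unsigned char values. The multiply then happens at int width, which is at least 16 bits, and the result converts to unsigned int on assignment.

        unsigned char foo = 255;
        /* foo and 10 promote to int; 255 * 10 = 2550 fits even a 16-bit
         * signed int, so the value is fully defined and an 8x8->16 MUL is
         * a legal as-if optimization: no precision is lost here. */
        unsigned int foo_u16 = foo * 10;

        /* Caveat: with a 16-bit int (as on the 8051), a product that can
         * exceed 32767, e.g. foo * foo (up to 65025), overflows the promoted
         * SIGNED type, which is undefined behavior; for such cases the cast
         * to unsigned int really is needed. */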

    Read the article

  • VSTO Word ContentControls, Y U No have Name property?

    - by System.Cats.Lol
    When you add a VSTO (not Word native) content control, you specify the name:

        controls.AddContentControl(wordRange, "foo", wdType);

    where controls is the VSTO (extended) Document.Controls collection. You can later look up the control by name:

        ContentControl myContentControl = controls["foo"];

    So why in the world is there no Name property for ContentControl (or ContentControlBase, or any of the other derivatives)? I'm implementing a wrapper class for the Document.Controls property that lets you add or iterate the content controls. When iterating the underlying Document.Controls, there's no way to look up the name of each control. (We need it to return an instance of our ContentControl wrapper.) So currently I'm doing this in our ContentControls wrapper class:

        public IEnumerator<IContentControl> GetEnumerator()
        {
            System.Collections.IEnumerator en = this.wordControls.GetEnumerator();
            while (en.MoveNext())
            {
                // VSTO Document.Controls includes all managed controls, not just
                // VSTO ContentControls; return only those.
                if (en.Current is Microsoft.Office.Tools.Word.ContentControl)
                {
                    // The control's name isn't stored with the control, only when it was added,
                    // so use a placeholder name for the wrapper.
                    yield return new ContentControl("Unknown", (Microsoft.Office.Tools.Word.ContentControl)en.Current);
                }
            }
        }

    I'd prefer to not have to resort to keeping a map of names-to-wrapper-objects in our ContentControls object. Can anyone tell me how to get the control's name (the name parameter that was passed to Controls.Add())?

    Read the article

  • How to open the download window when a dynamically created link is clicked in asp.net

    - by Ranjana
    I have stored a text file in the database. I need to show the text file when I click a link, and this link has to be created dynamically. My code below (aspx.cs):

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Page.IsPostBack)
            {
                DataTable dtassignment = new DataTable();
                dtassignment = serviceobj.DisplayAssignment(Session["staffname"].ToString());
                if (dtassignment != null)
                {
                    Byte[] bytes = (Byte[])dtassignment.Rows[0]["Data"];
                    //download(dtassignment);
                }
                divlink.InnerHtml = "";
                divlink.Visible = true;
                foreach (DataRow r in dtassignment.Rows)
                {
                    divlink.InnerHtml += "<a href='" + "'onclick='download(dtassignment)'>" + r["Filename"].ToString() + "</a>" + "<br/>";
                }
            }
        }

        public void download(DataTable dtassignment)
        {
            System.Diagnostics.Debugger.Break();
            Byte[] bytes = (Byte[])dtassignment.Rows[0]["Data"];
            Response.Buffer = true;
            Response.Charset = "";
            Response.Cache.SetCacheability(HttpCacheability.NoCache);
            Response.ContentType = dtassignment.Rows[0]["ContentType"].ToString();
            Response.AddHeader("content-disposition", "attachment;filename=" + dtassignment.Rows[0]["FileName"].ToString());
            Response.BinaryWrite(bytes);
            Response.Flush();
            Response.End();
        }

    I get the links created dynamically, but I am not able to download the text file when I click a link. How do I carry this out? Please help me out...
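    For reference, a sketch of the usual fix (the page name and query parameter are illustrative): an onclick attribute runs in the browser and cannot call the server-side download() method, so render a plain link that carries the file's key and let a second page stream the bytes from its Page_Load.

        foreach (DataRow r in dtassignment.Rows)
        {
            divlink.InnerHtml += "<a href='Download.aspx?file="
                + Server.UrlEncode(r["Filename"].ToString()) + "'>"
                + r["Filename"] + "</a><br/>";
        }
        // Download.aspx's Page_Load then looks the row up by the "file"
        // parameter and runs the Response.ContentType / content-disposition /
        // BinaryWrite sequence from download() above.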

    Read the article

  • parse content away from structure in a binary file

    - by Jeff Godfrey
    Using C#, I need to read a packed binary file created using FORTRAN. The file is stored in an "Unformatted Sequential" format as described here (about half-way down the page in the "Unformatted Sequential Files" section): http://www.tacc.utexas.edu/services/userguides/intel8/fc/f_ug1/pggfmsp.htm As you can see from the URL, the file is organized into "chunks" of 130 bytes or less and includes 2 length bytes (inserted by the FORTRAN compiler) surrounding each chunk. So, I need to find an efficient way to parse the actual file payload away from the compiler-inserted formatting. Once I've extracted the actual payload from the file, I'll then need to parse it up into its varying data types. That'll be the next exercise. My first thoughts are to slurp up the entire file into a byte array using File.ReadAllBytes. Then, just iterate through the bytes, skipping the formatting and transferring the actual data to a second byte array. In the end, that second byte array should contain the actual file contents minus all the formatting, which I'd then need to go back through to get what I need. As I'm fairly new to C#, I thought there might be a better, more accepted way of tackling this. Also, in case it's helpful, these files could be fairly large (say 30MB), though most will be much smaller...
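    A sketch of the straightforward approach under the layout the question describes (each chunk bracketed by a one-byte length before and after; that assumption should be verified against the compiler's actual record markers before relying on it):

        using System.IO;

        static byte[] StripRecordMarkers(string path)
        {
            using (var reader = new BinaryReader(File.OpenRead(path)))
            using (var payload = new MemoryStream())
            {
                while (reader.BaseStream.Position < reader.BaseStream.Length)
                {
                    byte length = reader.ReadByte();              // leading length byte
                    payload.Write(reader.ReadBytes(length), 0, length);
                    reader.ReadByte();                            // trailing length byte
                }
                return payload.ToArray();                         // markers removed
            }
        }

    At around 30 MB, streaming through a BinaryReader and iterating over a File.ReadAllBytes array are both workable; the two-pass copy is unlikely to be the bottleneck.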

    Read the article

  • ASP.NET MVC WAP, SharePoint Designer and SVN

    - by David Lively
    All, I'm starting a new ASP.NET MVC project which requires some content management capabilities. The people who will be managing the content prefer to use SharePoint Designer (successor to FrontPage) to modify content, and I'd like to allow them to keep doing that. The issues are:

      1. Since I'd like this to be a WAP, not a website project, how can I allow them to see their changes in action without requiring them to have Visual Studio on their local machines? Can I specify a "default" action for a controller, so that given a URL like /products/new_view_here they can save pages (views) and see them in the browser without going through the check-in/build/deploy process? (See the routing sketch below.)
      2. I'd like their changes to be stored in SVN; SharePoint Designer seems to only support Visual SourceSafe (ugh) directly.

    The ideas I've come up with so far are:

      1. Write an HTTP handler that implements the FrontPage Server Extensions protocol. This sounds time-consuming, but I haven't yet looked at the protocol spec. However, it would allow me to perform whatever operations I want on the server side, including checking files into SVN.
      2. Ditch the WAP in favor of a website project. I do not like having the source present on the server, however. Also, will MVC work in a website project?

    Surely someone has tackled this problem before?
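    On the "default action" sub-question only, a sketch of a catch-all route (names are illustrative; this addresses serving saved views, not the SVN half):

        // Global.asax route registration: /products/anything reaches one action...
        routes.MapRoute(
            "ProductPages",
            "products/{viewName}",
            new { controller = "Products", action = "Show" });

        // ...which renders the view whose file name matches the URL segment.
        public class ProductsController : Controller
        {
            public ActionResult Show(string viewName)
            {
                return View(viewName);   // picks up Views/Products/<viewName>.aspx
            }
        }

    Since views are compiled on demand by the runtime even in a WAP, a designer can drop a new view into Views/Products and browse to it without a redeploy; only changes to controller code still need a build.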

    Read the article

  • Database and logic layer for ASP.NET MVC application

    - by Ismail
    I'm going to start a new project which is going to be small initially but may grow big over the years. I'm strongly convinced that I'm going to use ASP.NET MVC with jQuery for the UI. I want to go with MySQL as the database for some reasons, but I'm worried about a few things. I have a good few years of experience working on SQL Server databases, and on one project I had a bad experience creating and managing stored procedures on a MySQL database. I'm totally new to LINQ, but I see that it is easier to use once you are familiar with it. The first thing is that accessing data should be easy, so I thought I should use LINQ to SQL, but somewhere I read that it is not directly supported for MySQL; however, the MySQL .NET connector adds support for the Entity Framework. I don't know the pros and cons of that. I would love to implement the repository pattern, as it allows applying filters in the logic layer rather than in the data access layer. Will that be possible if I use the Entity Framework? I'm not clear on how I should go about all this, or whether I should just forget everything else and use LINQ to SQL on SQL Server. I'm also concerned about performance: someone told me that if we use the Entity Framework, it fetches a lot of data and then filters it. Is that right? So the questions basically are:

      - Is LINQ to SQL with MySQL possible? If yes, where can I get more details on it?
      - What are the pros and cons of using the Entity Framework with MySQL?
      - Will it be easy to access data using the Entity Framework with MySQL?
      - Will I be able to implement the repository pattern, which allows applying filters in the logic layer rather than the data access layer, when I use the Entity Framework with MySQL? (See the sketch below.)
      - Does it fetch a lot of data from the database and then apply filters to it?

    If this sounds like too many questions, then, if you can, just let me know what you would do (with a considerable reason) in this situation, as an experienced person in this area; that should answer my question.
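    On the repository/filtering concern, a hedged sketch (the types are illustrative; the principle applies to any LINQ provider, including the MySQL connector's Entity Framework support): a repository that exposes IQueryable<T> lets the logic layer compose filters that are still translated into SQL, rather than fetched and filtered in memory.

        public interface IRepository<T> where T : class
        {
            IQueryable<T> Query();   // deferred: nothing runs until enumerated
        }

        // Logic layer: the Where clause is folded into the SQL sent to the
        // database, so only matching rows come back over the wire.
        var recent = orderRepository.Query()
                                    .Where(o => o.CreatedOn > cutoff)
                                    .ToList();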

    Read the article

  • Different EF Property DataType than Storage Layer Possible?

    - by dj_kyron
    Hi, I am putting together a WCF Data Service for PatientEntities using Entity Framework. My solution needs to address these requirements:

      1. Property DateOfBirth of entity Patient is stored in SQL Server as a string. It would be ideal if the entity class did not also use the string type, but rather a DateTime type (I would expect this to be possible, since we're abstracting away from the storage layer). Where could a conversion mechanism be put in place that would convert to and from DateTime/string, so that the entity and SQL Server stay in sync? I cannot change the storage layer's structure, so I have to work around it.
      2. WCF Data Services (read-only, so no need for saving changes) need to be used, since clients will be able to use LINQ expressions to consume the service. They can generate results based on any given query scenario they need, and not be constrained by a single method such as GetPatient(int ID).

    I've tried to use DTOs, but ran into the problem of mapping the ObjectContext to a DTO; I don't think that is theoretically possible... or it is too complicated if it is. I've tried to use self-tracking entities, but they require the metadata from the .edmx file if I'm correct, and that doesn't allow a different property data type. I also want to add customizations to my entity getter methods, so that a property "MRN" of type string has .Replace("MR~", string.Empty) performed before it is returned. I can add this to the getter methods, but the problem with that is Entity Framework will overwrite it the next time it regenerates the entity classes. Is there a permanent place I can put these? Should I use POCO instead? How would that work with WCF Data Services? Where would the service grab the metadata?
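    For the "permanent place" part, a sketch of the partial-class escape hatch (Patient, MRN, and DateOfBirth are the question's own names; the wrapper property names are illustrative): the classes EF generates are declared partial, so additions kept in a separate file survive .edmx regeneration.

        // Patient.Custom.cs, never touched by the code generator
        public partial class Patient
        {
            public string CleanMrn
            {
                get { return this.MRN.Replace("MR~", string.Empty); }
            }

            public DateTime DateOfBirthAsDate
            {
                get { return DateTime.Parse(this.DateOfBirth); }   // stored as string
            }
        }

    Note the caveat for requirement 2: WCF Data Services exposes the mapped properties from the model metadata, so such wrapper properties help server-side code but are not automatically queryable by clients.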

    Read the article
