Search Results

Search found 82005 results on 3281 pages for 'cost based data structure'.


  • DataGridView: Scroll bar is not refreshed

    - by David.Chu.ca
    I am fixing bugs on a project which was written in VS 2005. There is one DataGridView control on a form. When the form is first loaded, the grid is populated in code with rows of data from a collection; a method PopulateDataGrid() does the job. There is also another control on the form; when its value changes, the grid is cleared and then repopulated through PopulateDataGrid(). The problem is that when the grid is refreshed, the vertical scroll bar is not reset correctly, although I would expect it to be. Since the scroll bar is not reset, when I click on the grid and move down I get an exception: the max value of the scroll bar was exceeded. All the settings for the grid control are default values; for example, ScrollBars is Both. The following is the only related property I set, for row auto-sizing:

        poDataGridView.AutoSizeRowsMode = DataGridViewAutoSizeRowsMode.DisplayedCellsExceptHeaders;

    Is there any property I have to set in the designer?
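    A workaround sometimes suggested for this symptom is to rewind the grid's scroll position before clearing it, so the scroll bar range is recalculated from the new rows. A minimal sketch, assuming the rows are added in code (not data-bound) and reusing the names from the question; RefreshGrid is a hypothetical helper:

        private void RefreshGrid()
        {
            // Rewind the vertical scroll bar before removing rows, so its
            // range cannot reference rows that no longer exist.
            if (poDataGridView.RowCount > 0)
            {
                poDataGridView.FirstDisplayedScrollingRowIndex = 0;
            }
            poDataGridView.Rows.Clear();
            PopulateDataGrid(); // repopulate as before
        }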

    Read the article

  • How to send XML and other post parameters via cURL in PHP

    - by tomaszs
    Hello. I've used the code below to send XML to my REST API. $xml_string_data contains proper XML, and it is passed correctly to mypi.php:

        //set POST variables
        $url = 'http://www.server.cu/mypi.php';
        $fields = array(
            'data' => urlencode($xml_string_data)
        );

        //url-ify the data for the POST
        $fields_string = "";
        foreach ($fields as $key => $value) {
            $fields_string .= $key . '=' . $value . '&';
        }
        // rtrim() returns a new string, so the assignment is required
        $fields_string = rtrim($fields_string, '&');
        echo $fields_string;

        //open connection
        $ch = curl_init();
        curl_setopt($ch, CURLOPT_URL, $url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
        curl_setopt($ch, CURLOPT_POST, count($fields));
        curl_setopt($ch, CURLOPT_POSTFIELDS, $fields_string);
        curl_setopt($ch, CURLOPT_HTTPHEADER, array("Expect: "));

        //execute post
        $result = @curl_exec($ch);

    But when I added another field:

        $fields = array(
            'method' => "methodGoPay",
            'data'   => urlencode($xml_string_data)
        );

    it stopped working. On the mypi.php side I no longer receive any POST parameters at all! Could you please tell me what to do to send XML and other POST parameters in one cURL request? Please don't suggest using any libraries; I want to accomplish this in plain PHP.

    Read the article

  • MVC JsonResult not working with Chrome?

    - by Karsten Detmold
    I want jQuery to take a JsonResult from my MVC controller, but it doesn't receive any data! If I put the output into a text file and enter its link, it works, so I think my jQuery is fine. Then I tested with other browsers like Chrome and I saw NOTHING; the requested page was just empty, with no errors. IE also seems to have problems receiving my string; only Firefox displays it, but why?

        public JsonResult jsonLastRequests()
        {
            List<Request> requests = new List<Request>();
            while (r.Read())
            {
                requests.Add(new Models.Request()
                {
                    ID = (int)r[0],
                    SiteID = r[1].ToString(),
                    Lat = r[2].ToString(),
                    City = r[4].ToString(),
                    CreationTime = (DateTime)r[5]
                });
            }
            r.Close();
            return Json(requests);
        }

    I found out that returning the JSON as a string doesn't work either! It now works as a string in all browsers, but jQuery is still not loading anything:

        var url = "http://../jsonLastRequests";
        var source = {
            datatype: "json",
            datafields: [
                { name: 'ID' },
                { name: 'SiteID' },
                { name: 'Lat' },
                { name: 'CreationTime' },
                { name: 'City' },
            ],
            id: 'id',
            url: url
        };
        var dataAdapter = new $.jqx.dataAdapter(source, {
            downloadComplete: function (data, status, xhr) { },
            loadComplete: function (data) { },
            loadError: function (xhr, status, error) { }
        });

    I fixed my problem by adding the response header:

        access-control-allow-origin: *
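    Since the fix turned out to be the Access-Control-Allow-Origin header, here is a minimal sketch of emitting it from the MVC action itself. The action name is from the question, LoadRequests() is a hypothetical helper, and allowing every origin with "*" is an assumption you may want to tighten:

        public JsonResult jsonLastRequests()
        {
            // Let pages served from other origins read this JSON response.
            Response.AppendHeader("Access-Control-Allow-Origin", "*");

            List<Request> requests = LoadRequests(); // hypothetical helper

            // AllowGet is required if the endpoint is fetched via HTTP GET.
            return Json(requests, JsonRequestBehavior.AllowGet);
        }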

    Read the article

  • Tracing/profiling instructions

    - by LeChuck2k
    Hi y'all. I'd like to statistically profile my C code at the instruction level. I need to know how many additions, multiplications, divides, etc. I'm performing. This is not your usual run-of-the-mill code profiling requirement. I'm an algorithm developer and I want to estimate the cost of converting my code to hardware implementations. For this, I'm being asked for the instruction-call breakdown at run time (parsing the compiled assembly isn't sufficient, as it doesn't account for loops in the code). After looking around, it seems VMware may offer a possible solution, but I still couldn't find the specific feature that would let me trace the instruction stream of my process. Are you aware of any profiling tools that enable this?

    Read the article

  • Endianness and C APIs: specifically OpenSSL

    - by Hassan Syed
    I have an algorithm that uses the following OpenSSL calls:

        HMAC_update() / HMAC_final()            // ripe160
        EVP_CipherUpdate() / EVP_CipherFinal()  // cbc_blowfish

    These algorithms take an unsigned char * into the "plain text". My input data comes from a C++ std::string::c_str(), which originates from a protocol buffer object as an encoded UTF-8 string. UTF-8 strings are meant to be endian neutral. However, I'm a bit paranoid about how OpenSSL may perform operations on the data. My understanding is that encryption algorithms work on 8-bit blocks of data, and if an unsigned char * is used for pointer arithmetic when the operations are performed, the algorithms should be endian neutral and I do not need to worry about anything. My uncertainty is compounded by the fact that I am working on a little-endian machine and have never done any real cross-architecture programming. My reasoning is based on the following two properties:

    - std::string (not wstring) internally uses an 8-bit pointer, and the resulting c_str() pointer will iterate the same way regardless of the CPU architecture.
    - Encryption algorithms are, either by design or by implementation, endian neutral.

    I know the best way to get a definitive answer is to use QEMU and do some cross-platform unit tests (which I plan to do). My question is a request for comments on my reasoning, and perhaps it will assist other programmers faced with similar problems.

    Read the article

  • How to Integrate SharePoint 2007/2010 and Google Apps?

    - by goober
    Hello all, my (smaller) company has an existing Google Apps deployment, used for e-mail, calendaring, etc. I'm looking into a SharePoint setup (most likely 2010). One of its best features is that new events are added to one's Outlook calendar, e-mails can be sent automatically, and so on. Naturally, this works best out of the box with Exchange. I know I can add my own OpenID login system via an OpenID provider for SharePoint and get my users into the system. My question is: can anyone recommend the best way to make sure that events automatically find their way into users' calendars and e-mail on the Google Apps system? This would enable us to deploy SharePoint without first migrating our e-mail system to Exchange (Google Apps is more cost-effective for our needs, and I'm required to keep it). Thanks in advance for any help!

    Read the article

  • ASP.NET - Web service not being called from JavaScript

    - by Robert
    OK, I'm stumped on this one. I've got an ASMX web service up and running. I can browse to it (~/Webservice.asmx), see the .NET-generated page, and test it out, and it works fine. I've got a page with a ScriptManager with the web service registered, and I can call it from JavaScript (Webservice.Method(...)) just fine. However, I want to use this web service as part of a jQuery autocomplete box, so I have to use the URL instead, and that way the web service is never called. Here's the code:

        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        // To allow this Web Service to be called from script, using ASP.NET AJAX,
        // uncomment the following line.
        [System.Web.Script.Services.ScriptService]
        public class User : System.Web.Services.WebService
        {
            public User()
            {
                //Uncomment the following line if using designed components
                //InitializeComponent();
            }

            [WebMethod]
            public string Autocomplete(string q)
            {
                StringBuilder sb = new StringBuilder();
                //doStuff
                return sb.ToString();
            }
        }

    web.config:

        <system.web>
          <webServices>
            <protocols>
              <add name="HttpGet"/>
              <add name="HttpPost"/>
            </protocols>
          </webServices>
        </system.web>

    And the HTML:

        $(document).ready(function () {
            User.Autocomplete("rr", function (data) {
                alert(data);
            }); // this works

            $("#<%=txtUserNbr.ClientID %>").autocomplete("/User.asmx/Autocomplete"); // doesn't work

            $.ajax({
                url: "/User.asmx/Autocomplete",
                success: function (data) { alert(data); },
                error: function (e) { alert(e); }
            }); // just to test... calls the "error" function
        });
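    For what it's worth, an ASMX [ScriptService] method normally only accepts JSON-encoded POSTs with Content-Type: application/json, which a plain URL-based call does not send. A hedged sketch of the attribute that relaxes this (whether it is the root cause in this setup is an assumption):

        [WebMethod]
        // Allow the method to be reached by a plain GET such as
        // /User.asmx/Autocomplete?q=rr and to respond with JSON.
        [System.Web.Script.Services.ScriptMethod(
            UseHttpGet = true,
            ResponseFormat = System.Web.Script.Services.ResponseFormat.Json)]
        public string Autocomplete(string q)
        {
            StringBuilder sb = new StringBuilder();
            //doStuff, as in the original method
            return sb.ToString();
        }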

    Read the article

  • How to figure out which record has been deleted in an efficient way?

    - by janetsmith
    Hi, I am working on an in-house ETL solution, from db1 (Oracle) to db2 (Sybase). We need to transfer data incrementally (Change Data Capture?) into db2. I have only read access to the tables, so I can't create any table or trigger in the Oracle db1. The challenge I am facing is: how do I detect record deletion in Oracle? The only solution I can think of is to use an additional standalone/embedded db (e.g. Derby, H2, etc.). This db contains two tables, namely old_data and new_data. old_data contains the primary key field from the table of interest in Oracle. Every time the ETL process runs, the new_data table is populated with the primary key field from the Oracle table. After that, I run the following SQL command to get the deleted rows:

        SELECT old_data.id FROM old_data
        WHERE old_data.id NOT IN (SELECT new_data.id FROM new_data)

    I think this will be a very expensive operation when the volume of data becomes very large. Do you have any better idea of doing this? Thanks.

    Read the article

  • Reading strings and integers from a .txt file and printing output as strings only

    - by screename71
    Hello, I'm new to C++, and I'm trying to write a short C++ program that reads lines of text from a file, with each line containing one integer key and one alphanumeric string value (no embedded whitespace). The number of lines is not known in advance (i.e., keep reading lines until the end of file is reached). The program needs to use the std::map data structure to store the integers and strings read from input (and to associate the integers with the strings). The program then needs to output the string values (but not the integer values) to standard output, one per line, sorted by integer key value (smallest to largest). So, for example, suppose I have a text file called "data.txt" which contains the following five lines:

        10 dog
        -50 horse
        0 cat
        -12 zebra
        14 walrus

    The output should then be:

        horse
        zebra
        cat
        dog
        walrus

    I've pasted below the progress I've made so far on my C++ program:

        #include <fstream>
        #include <iostream>
        #include <map>

        using namespace std;
        using std::map;

        int main()
        {
            string name;
            signed int value;
            ifstream myfile("data.txt");
            while (!myfile.eof())
            {
                getline(myfile, name, '\n');
                myfile >> value >> name;
                cout << name << endl;
            }
            return 0;
            myfile.close();
        }

    Unfortunately, this produces the following incorrect output:

        horse
        cat
        zebra
        walrus

    If anyone has any tips, hints, suggestions, etc. on changes and revisions I need to make to the program to get it to work as needed, can you please let me know? Thanks!

    Read the article

  • Heavy MySQL operation & time constraints [closed]

    - by Rahul Jha
    There is a performance issue that I am stuck on in my application, which is based on PHP & MySQL. The application is for data migration: data has to be uploaded and, after various processes (cleaning of foreign characters, duplicate check, ID generation), inserted into one central table and then into 5 different tables. There, an ID is generated and that ID has to be updated back to the central table. There are different sets of records and validation rules. The problem I am facing is that when I insert a file of, say, 4K rows (containing 20 columns), it works fine: within 15 minutes everything is inserted. But when I insert the same records again, it takes one hour (ideally they should be inserted quickly, with the earlier data marked as duplicate). Going through the log file, I noticed there is a MySQL SELECT statement where I check the duplicates and get the IDs which are duplicated. Then, inside a for loop, I call a function which inserts records into the 5 tables and updates the ID to the central table. This function call takes the major part of the whole processing time. P.S. The records have to be inserted record by record. Kindly suggest a solution.

        //This is that sample code
        $query = mysql_query("SELECT DISTINCT p1.ID
            FROM table1 p1, table2 p2, table3 a
            WHERE p2.datatype = 0
              AND (p1.datatype = 1 || p1.datatype = 2)
              AND p2.ID = 0
              AND p1.ID = a.ID
              AND p1.coulmn1 = p2.column1
              AND p1.coulmn2 = p2.coulmn2
              AND a.coulmn3 = p2.column3");
        $num = mysql_num_rows($query);
        for ($i = 0; $i < $num; $i++) {
            $f = mysql_result($query, $i, "ID");
            //calling function
            RecordInsert($f);
        }

    Read the article

  • What can I use to journal writes to the file system

    - by Dmitry
    Hello, all. I need to track all writes to files in order to keep a synchronized version of the files in a different place (a server, or just another directory; it doesn't matter). The constraints:

    - All files are located in the same directory.
    - Feel free to create some system files (e.g. SomeFileName.Ext~temp-data).
    - No one has concurrent access to the synced directory; nobody spoils our meta-files or changes the real files before we apply the postponed writes (like commits).
    - There is no need to recover "local" changes in case of a crash; the system can just be rolled back to the "server" state by a simple copy from it.
    - It is significant that it is transparent to use (so the programmer can just call the ordinary fopen(), read(), write()).

    It must be guaranteed that the copy of the files the "server" has is consistent: the whole set of files as it existed at some moment in time. They may be significantly outdated, but they must be a fair snapshot of all files at some point. As I understand it, I should overload the write logic to collect data in order to send changes to the "server", for example by writing to a temporary File~tmp, and I should also overload reads so the program can read the actual data of the file. It would be great if you could suggest an existing library (Java or C++, it is unimportant) or solution (VCS customizing?), or give hints on how I should write it myself.

    Edit: After some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and the other FS API system calls. A log-structured file system in userspace would be an alternative too.
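    As a shape for that copy-on-write wrapper, a small sketch (C# purely for illustration, since the question is language-agnostic; WriteCow is a made-up helper and the file naming follows the question's ~temp-data convention): every write lands in a side file, and a commit step swaps it in atomically.

        using System.IO;

        static void WriteCow(string path, byte[] data)
        {
            string temp = path + "~temp-data"; // postponed write goes here first
            File.WriteAllBytes(temp, data);

            // Commit: atomically replace the real file. Note File.Replace
            // requires the destination to exist already; the backup copy is
            // what a sync step could ship to the "server" for rollback.
            File.Replace(temp, path, path + "~bak");
        }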

    Read the article

  • What exactly is a reentrant function?

    - by eSKay
    Most of the time, the definition of reentrancy is quoted from Wikipedia:

    A computer program or routine is described as reentrant if it can be safely called again before its previous invocation has been completed (i.e. it can be safely executed concurrently). To be reentrant, a computer program or routine:

    - Must hold no static (or global) non-constant data.
    - Must not return the address of static (or global) non-constant data.
    - Must work only on the data provided to it by the caller.
    - Must not rely on locks to singleton resources.
    - Must not modify its own code (unless executing in its own unique thread storage).
    - Must not call non-reentrant computer programs or routines.

    How is "safely" defined? If a program can be safely executed concurrently, does that always mean it is reentrant? What exactly is the common thread between the six points mentioned that I should keep in mind while checking my code for reentrancy? Also:

    - Are all recursive functions reentrant?
    - Are all thread-safe functions reentrant?
    - Are all recursive and thread-safe functions reentrant?

    While writing this question, one thing came to mind: are terms like reentrance and thread safety absolute at all, i.e. do they have fixed, concrete definitions? If they are not, this question is not very meaningful. Thanks!
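    To make the first two bullet points above concrete, a small illustration (C# is an arbitrary choice here, and the function names are made up; the notion is language-independent):

        // Not reentrant: depends on shared static state, so an invocation
        // that starts before another completes sees a torn read-modify-write.
        static int counter = 0;
        static int NextIdShared()
        {
            counter = counter + 1;
            return counter;
        }

        // Reentrant: holds no static data and works only on the data the
        // caller provides, so concurrent or nested calls cannot interfere.
        static int NextId(int previousId)
        {
            return previousId + 1;
        }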

    Read the article

  • Macro and array crossing

    - by Thomas
    I am having a problem with a Lisp macro. I would like to create a macro which generates a switch-case according to an array. Here is the code to generate the switch-case:

        (defun split-elem (val)
          `(,(car val) ',(cdr val)))

        (defmacro generate-switch-case (var opts)
          `(case ,var
             ,(mapcar #'split-elem opts)))

    I can use it with code like this:

        (generate-switch-case onevar ((a . A) (b . B)))

    But when I try to do something like this:

        (defparameter *operators* '((+ . OPERATOR-PLUS)
                                    (- . OPERATOR-MINUS)
                                    (/ . OPERATOR-DIVIDE)
                                    (= . OPERATOR-EQUAL)
                                    (* . OPERATOR-MULT)))

        (defmacro tokenize (data ops)
          (let ((sym (string->list data)))
            (mapcan (lambda (x) (generate-switch-case x ops)) sym)))

        (tokenize data *operators*)

    I get this error:

        *** - MAPCAR: A proper list must not end with OPS

    But I don't understand why. When I print the type of ops I get SYMBOL; I was expecting CONS. Is that related? Also, for my tokenize function, how many times is the lambda evaluated (or the macro expanded)?

    Read the article

  • Can I commit changes to actual database while debugging C# in Visual Studio?

    - by nathant23
    I am creating a C# application using Visual Studio that uses a SQL Server Express database. I believe that when I hit F5 to debug the application and make changes to the database, the changes are being made to a copy of the database in the bin/debug folder. When I stop debugging and hit F5 the next time, a new copy of the database is put in the bin/debug folder, so all the changes made the previous time are gone. My question is: when I am debugging the application, is there a way to have it make changes to the actual database so that those changes are actually saved, or will it only ever make changes to the copy in the bin/debug folder (if that is what is actually happening)? I've seen similar questions, but I couldn't find an answer that says whether it's possible to make those changes persistent in the actual .mdf file. The reason I ask is that as I build this application I am continuously adding pieces and testing that they all work together. The test data I enter is actual data that I would like to stay in the database, and this would save me from having to re-enter it later. Thanks in advance for any help or information that could help me better understand the process.

    Read the article

  • Writing a CSV file in ASP.NET

    - by Keith
    Hello, I'm trying to export data to a CSV file. As there are Chinese characters in the data I had to use Unicode, but after adding the preamble for Unicode, the commas are no longer recognized as delimiters and all data is written to the first column. I'm not sure what is wrong. Below is my code, which I wrote in an .ashx file:

        DataView priceQuery = (DataView)context.Session["priceQuery"];
        String fundName = priceQuery.Table.Rows[0][0].ToString().Trim().Replace(' ', '_');

        context.Response.Clear();
        context.Response.ClearContent();
        context.Response.ClearHeaders();
        context.Response.ContentType = "text/csv";
        context.Response.ContentEncoding = System.Text.Encoding.Unicode;
        context.Response.AddHeader("Content-Disposition", "attachment; filename=" + fundName + ".csv");
        context.Response.BinaryWrite(System.Text.Encoding.Unicode.GetPreamble());

        String output = fundName + "\n";
        output += "Price, Date" + "\n";
        foreach (DataRow row in priceQuery.Table.Rows)
        {
            string price = row[2].ToString();
            string date = ((DateTime)row[1]).ToString("dd-MMM-yy");
            output += price + "," + date + "\n";
        }
        context.Response.Write(output);
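    A commonly reported explanation for this exact symptom (an assumption here, since it depends on the program opening the file): when Excel sees a UTF-16 byte-order mark it parses the file as tab-separated and ignores commas. A minimal sketch of the tab-delimited variant, reusing the variables from the question:

        // Write tab-separated values when sending UTF-16 ("Unicode") output;
        // a StringBuilder also avoids repeated string concatenation.
        const string sep = "\t";
        System.Text.StringBuilder sb = new System.Text.StringBuilder();
        sb.Append(fundName).Append('\n');
        sb.Append("Price").Append(sep).Append("Date").Append('\n');
        foreach (DataRow row in priceQuery.Table.Rows)
        {
            string price = row[2].ToString();
            string date = ((DateTime)row[1]).ToString("dd-MMM-yy");
            sb.Append(price).Append(sep).Append(date).Append('\n');
        }
        context.Response.Write(sb.ToString());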

    Read the article

  • (Newbie) Amazon Web Services Apache Server

    - by Samnsparky
    Hello! I am trying to get a feel for the costs of running Apache on AWS continuously. Assuming that the service is scarcely used, does anyone know how many CPU hours that would eat up in a month just by sitting there and running? I understand that this is slightly impractical, but I am trying to figure out the cost of entry for deploying an application on this platform (as compared to GAE). I suspect it is small, but I would like to know. Thank you for your help, Sam

    Read the article

  • How to write this snippet in Python?

    - by morpheous
    I am learning Python (I have a C/C++ background). I need to write something practical in Python, though, whilst learning. I have the following pseudocode (my first attempt at writing a Python script, since I started reading about Python yesterday). Hopefully the snippet conveys the logic of what I want to do. BTW, I am using Python 2.6 on Ubuntu Karmic. Assume the script is invoked as:

        script_name.py directory_path

        import csv, sys, os, glob

        # Can I declare that the function accepts a dictionary as first arg?
        def getItemValue(item, key, defval)
            return !item.haskey(key) ? defval : item[key]

        dirname = sys.argv[1]

        # declare some default values here
        weight, is_male, default_city_id = 100, true, 1

        # fetch some data from a database table into a nested dictionary,
        # indexed by a string
        curr_dict = load_dict_from_db('foo')

        # iterate through all the files matching *.csv in the specified folder
        for infile in glob.glob(os.path.join(dirname, '*.csv')):
            # get the file name (without the '.csv' extension)
            code = infile[0:-4]
            # open the file and iterate through the rows of the current file (a CSV file)
            f = open(infile, 'rt')
            try:
                reader = csv.reader(f)
                for row in reader:
                    # look up the id for the code in the dictionary
                    id = curr_dict[code]['id']
                    name = row['name']
                    address1 = row['address1']
                    address2 = row['address2']
                    city_id = getItemValue(row, 'city_id', default_city_id)
                    # insert row into database table
            finally:
                f.close()

    I have the following questions:

    1. Is the code written in a Pythonic enough way (is there a better way of implementing it)?
    2. Given a table with a schema like the one shown below, how may I write a Python function that fetches data from the table and returns it in a dictionary indexed by string (name)?
    3. How can I insert the row data into the table? (Actually I would like to use a transaction if possible, and commit just before the file is closed.)

    Table schema:

        create table demo (id int, name varchar(32), weight float, city_id int);

    BTW, my backend database is PostgreSQL.

    Read the article

  • Implementation of MVC with SQLite and NSURLConnection, use cases?

    - by user324723
    I'm interested in knowing how others have implemented/designed database and web services in their iPhone apps, and how they simplified them for the entire application. My application depends on these services, and I can't figure out an efficient way to use them together, due to the (semi-)complexity of my requirements. My past attempts at combining them haven't been completely successful, or at least not optimal in my mind. I'm building a database-driven iPhone app that uses a relational database in SQLite and consumes web services based on missing content or user interaction. Like this hasn't been done before... right? Since I am using a relational database, any web service consumed requires normalization, parsing the result, and persisting it to the database before it can be displayed in a table view controller. The application's UI consists of nested (nav controller) table views where a user can select a cell and be taken to the next table view, where it attempts to populate the table view's data source from the database. If nothing exists in the database, it will send a request via web services to download its content; thus download - parse - persist - query - display. Since the user can request a refresh of this data, the same process applies. A quick description of what I've implemented and tried to run with:

    1st attempt: used a singleton web service class that handled sending web service requests, parsing the result, and returning it to the table view controller via delegate protocols. Once the controller received the data, it was responsible for persisting it to the database and re-returning the result. I didn't like only being able to prevent the case where the delegate selector no longer exists (released), causing the app to crash.

    2nd attempt: used NSNotificationCenter for easy access to both database and web services, but later realized it was more complex due to adding and removing observers per view (which isn't advised anyway).

    Read the article

  • How can I intercept an exception that occurs during serialization in WCF?

    - by bonomo
    I have a legitimate data object with all the data contract / data member attributes. For some reason the WCF service crashes after the operation has completed and the result is passed as a return value. I believe it has something to do with WCF not being able to serialize the result properly. The test client doesn't say anything specific:

        The underlying connection was closed: The connection was closed unexpectedly.

        Server stack trace:
           at System.ServiceModel.Channels.HttpChannelUtilities.ProcessGetResponseWebException(WebException webException, HttpWebRequest request, HttpAbortReason abortReason)
           at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)
           at System.ServiceModel.Channels.RequestChannel.Request(Message message, TimeSpan timeout)
           at System.ServiceModel.Dispatcher.RequestChannelBinder.Request(Message message, TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannel.Call(String action, Boolean oneway, ProxyOperationRuntime operation, Object[] ins, Object[] outs, TimeSpan timeout)
           at System.ServiceModel.Channels.ServiceChannelProxy.InvokeService(IMethodCallMessage methodCall, ProxyOperationRuntime operation)
           at System.ServiceModel.Channels.ServiceChannelProxy.Invoke(IMessage message)

        Exception rethrown at [0]:
           at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
           at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
           at IFacade.PickSecurities(String pattern, Int32 atMost)
           at FacadeClient.PickSecurities(String pattern, Int32 atMost)

        Inner exception: The underlying connection was closed: The connection was closed unexpectedly.
           at System.Net.HttpWebRequest.GetResponse()
           at System.ServiceModel.Channels.HttpChannelFactory.HttpRequestChannel.HttpChannelRequest.WaitForReply(TimeSpan timeout)

    I am in control of creating the instance of the service, using a customized service host factory. I know I can set up trace listeners and check the logs, but that's a lot of hassle; I would rather handle it explicitly on the server at the time it happens. So how can I intercept that exception programmatically and return an appropriate fault message?
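    One server-side way to catch this is WCF's IErrorHandler extension point, which sees exceptions, including serialization failures on the reply path, before the channel is torn down. A minimal sketch, with the wiring to the ChannelDispatchers (which the custom service host factory could do) left out:

        using System;
        using System.ServiceModel;
        using System.ServiceModel.Channels;
        using System.ServiceModel.Dispatcher;

        public class LoggingErrorHandler : IErrorHandler
        {
            public bool HandleError(Exception error)
            {
                // Log the real exception here (replace with your logger).
                Console.Error.WriteLine(error);
                return true; // true = the error counts as handled
            }

            public void ProvideFault(Exception error, MessageVersion version,
                                     ref Message fault)
            {
                // Turn the server-side exception into a fault the client
                // can actually read instead of a dropped connection.
                var fe = new FaultException(error.Message);
                MessageFault mf = fe.CreateMessageFault();
                fault = Message.CreateMessage(version, mf, fe.Action);
            }
        }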

    Read the article

  • Barcode to Product Name Converter

    - by spagetticode
    Hi all, we are designing a mobile shopping system. The camera on the phone will read a barcode, and then we have to convert the barcode to a standard product name in order to save it to our database. We save it to our database because we connect to the web services of local e-commerce sites to get their price for the related product. We send the product name to get the price from them, so that the user can see the prices, compare, and buy. We cannot send the barcode number to get the data from the e-commerce sites, because some sites do not have barcode number information. So I have to somehow get the product name knowing only the barcode. Google returns results when the barcode number is searched, but how am I going to parse that data? Or how will I know which Google search result best matches my input? Is there a site that sells matched barcode and product name data? We are designing the system in C#. Thanks a lot.

    Read the article

  • Why are my connections not closed even if I explicitly dispose of the DataContext?

    - by Chris Simpson
    I encapsulate my LINQ to SQL calls in a repository class, which is instantiated in the constructor of my overloaded controller. The constructor of the repository class creates the data context, so that for the life of the page load only one data context is used. In the destructor of the repository class I explicitly call Dispose on the DataContext, though I do not believe this is necessary. Using Performance Monitor, if I watch my User Connections count and repeatedly load a page, the number increases once per page load. Connections do not get closed or reused (for about 20 minutes). I tried putting Pooling=false in my config to see if it had any effect, but it did not. In any case, with pooling I wouldn't expect a new connection for every load; I would expect connections to be reused. I've put a break point in the destructor to make sure the Dispose is being hit, and sure enough it is. So what's happening? Some code to illustrate what I said above.

    The controller:

        public class MyController : Controller
        {
            protected MyRepository rep;

            public MyController()
            {
                rep = new MyRepository();
            }
        }

    The repository:

        public class MyRepository
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            ~MyRepository()
            {
                if (dc != null)
                {
                    //if (dc.Connection.State != System.Data.ConnectionState.Closed)
                    //{
                    //    dc.Connection.Close();
                    //}
                    dc.Dispose();
                }
            }

            // etc
        }

    Note: I add a number of hints and context information to the DC for auditing purposes. This is essentially why I want one connection per page load.
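    One detail that may explain the roughly 20-minute lag: a C# destructor is a finalizer, and finalizers run only when the garbage collector gets around to the object, so the Dispose call happens long after the page load. A sketch of deterministic cleanup instead, reusing the names from the question:

        public class MyRepository : IDisposable
        {
            protected MyDataContext dc;

            public MyRepository()
            {
                dc = getDC();
            }

            public void Dispose()
            {
                // Runs when the caller disposes the repository (e.g. from
                // the controller's Dispose(bool) override), not at GC time.
                if (dc != null)
                {
                    dc.Dispose();
                    dc = null;
                }
            }
        }

    The controller can then override its own Dispose(bool disposing) and call rep.Dispose() there, so each request releases its connection as soon as the page is done.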

    Read the article

  • jQuery/Ajax working on IIS 5.1 but not IIS 6

    - by Mikejh99
    I'm running into a weird issue here. I have code that makes jQuery AJAX calls to a web service and dynamically adds controls using jQuery. Everything works fine on my dev machine running IIS 5.1, but not when deployed to IIS 6. I'm using VS2010/ASP.NET 4.0, C#, jQuery 1.4.2 and jQuery UI 1.8.1, and the same browser for each. It partially works, though: the code adds the controls to the page, but they aren't visible until I click them (and even then they aren't displayed). I thought this was a CSS issue, but the styles are there too. The AJAX calls look like this:

        $.ajax({
            url: "/WebServices/AssetManager.asmx/Assets",
            type: "POST",
            datatype: "json",
            async: false,
            data: "{'q':'" + req.term + "', 'type':'Condition'}",
            contentType: "application/javascript; charset=utf-8",
            success: function (data) {
                res($.map(data.d, function (item) {
                    return {
                        label: item.Name,
                        value: item.Name,
                        id: item.Id,
                        datatype: item.DataType
                    }
                }))
            }
        })

    Changing the content type makes the autocomplete fail. I've quadruple-checked that all the paths are correct, there is no document footer enabled in IIS, and I'm not using IIS compression. Any idea why the page displays and works properly in IIS 5 but only partially in IIS 6? (If it failed completely, that would make more sense!) Is it a jQuery or CSS issue?

    Read the article

  • Updating a list item in an ASPX page in SharePoint Designer

    - by Andy
    Hey all, right now I'm using SharePoint Designer to create a new ASPX page. I am using a Data View to display information from a list. One of the fields in the list is a choice field. I was wondering if there is any way I could display all of the other fields but allow that one field to be edited on the page, without adding an edit link. Ideally, a user could go in and edit a field value (hopefully in a drop-down list) within the Data View without being redirected to the list or a form. I'm thinking there is a way to do this through JavaScript embedded in the HTML, or through a workflow of some sort. I'm new to JavaScript and don't know how to do this. I have tried to insert a drop-down list and provide a data source for it, but it will only show all of the field values in the list. Thus, I am unable to display the choice options, show the current value of the list item, and edit/update the list item. Hopefully this makes sense. Can anyone help me out here? Thanks a lot, Andy

    Read the article

  • JavaScript + PHP: $_POST array empty

    - by Peterim
    While trying to send a POST request via xmlhttp.open("POST", "url", true) (JavaScript) to the server, I get an empty $_POST array. Firebug shows that the data is being sent. Here is the data string from Firebug:

        a=1&q=151a45a150....

    But $_POST['q'] returns nothing. The interesting thing is that file_get_contents('php://input') does have my data (the string above), but PHP somehow doesn't recognize it. Tried both $_POST and $_REQUEST, nothing works. Headers being sent:

        POST /test.php HTTP/1.1
        Host: website.com
        User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us;q=0.7,en;q=0.3
        Accept-Encoding: gzip,deflate
        Accept-Charset: utf-8;q=0.7,*;q=0.7
        Keep-Alive: 115
        Connection: keep-alive
        Referer: http://website.com/
        Content-Length: 156
        Content-Type: text/plain; charset=UTF-8
        Pragma: no-cache
        Cache-Control: no-cache

    Thank you for any suggestions.

    Read the article

  • Rendering blocks side-by-side with FOP

    - by Rolf
    I need to generate a PDF from XML data using Apache FOP. The problem is that FOP doesn't support fo:float, and I really need to have items (boxes of rendered data) side by side in the PDF. More precisely, I need them in a 4x4 grid on each page. In HTML, I would simply render these as left-floated divs with appropriate widths and heights. My data looks something like this:

        <item id="1">
            <a>foo</a>
            <b>bar</b>
            <c>baz</c>
        </item>
        <item id="2">...</item>
        ...
        <item id="n">...</item>

    I considered using a two-column region-body, but then the order of the items would be 1, 3, 2, 4 (reading from left to right), since they would be rendered tb-lr instead of lr-tb, and I need them to be in the correct order (the id in the XML above). I suppose I could try using a table, but I'm not quite sure how to group the items into table rows. So, some kind of workaround for the lack of fo:float would be greatly appreciated.

    Read the article
