Search Results

Search found 74550 results on 2982 pages for 'wcf data service'.

Page 280 of 2982

  • How can I return IEnumerable data from a function to a GridView with Entity Framework?

    - by programmerist
    protected IEnumerable GetPersonalsData() { // List personel; using (FirmaEntities firmactx = new FirmaEntities()) { var personeldata = (from p in firmactx.Personals select new { p.ID, p.Name, p.SurName }); return personeldata.AsEnumerable(); } } I want to use GetPersonalsData() as the GridView data source, like this: gwPersonel.DataSource = GetPersonalsData(); gwPersonel.DataBind(); On gwPersonel.DataBind() I get this error: "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection."
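
    A minimal sketch of one common fix, assuming the FirmaEntities model from the question: call ToList() so the query executes while the context is still open, and the GridView then binds to an in-memory list instead of a query that still needs the disposed ObjectContext.

        // Sketch only; assumes using System.Collections and System.Linq.
        protected IEnumerable GetPersonalsData()
        {
            using (var firmactx = new FirmaEntities())
            {
                return (from p in firmactx.Personals
                        select new { p.ID, p.Name, p.SurName })
                       .ToList();   // query runs here, inside the using block
            }
        }

        // gwPersonel.DataSource = GetPersonalsData(); gwPersonel.DataBind(); now binds the in-memory list.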

    Read the article

  • Service Browser for AMF calls (Flex to Java)

    - by Tehsin
    Has anyone used or is aware of a service browser to test AMF calls? I am looking for a tool similar to ZamfBrowser ( http://www.zamfbrowser.org ), but one that works for the Java environment. ZamfBrowser is geared towards AMFPHP. The idea here is to provide a service browser, that allows developers to test Java services using the AMF protocol, without having to go through the Flex UI all the time. There has got to be something out there already for this, but I can't seem to locate anything..... It's kind of funny and strange that a service browser exists for AMFPHP but not for regular AMF calls in a Java environment. I would imagine something exists under Blaze or LCDS? ... Trying to find it in the docs but can't seem to find anything .... The best alternative I can think of at the moment is to use FlexMonkey to record stuff, and then to simulate it using that....which is okay I guess but still sucks because you have to go in and create the Flex UI first, whereas with something like ZamfBrowser, you simply point it at the service calls, it tells the server-side developers if their code works, etc. generates the required as3 classes for you... and makes the integration process much easier in a large team. Any help or insight would be appreciated :) Thanks!

    Read the article

  • File Locked by Service (after service code reads the text file)

    - by rvpals
    I have a Windows service written in C# .NET. The service runs on an internal timer; every time the interval hits, it tries to read a log file into a String. My issue is that every time the log file is read, the service seems to lock the log file. The lock on that log file continues until I stop the Windows service. While the service is checking the log file, the same log file needs to be continuously updated by another program. If the file lock is on, the other program cannot update the log file. Here is the code I use to read the text log file: private string ReadtextFile(string filename) { string res = ""; try { System.IO.FileStream fs = new System.IO.FileStream(filename, System.IO.FileMode.Open, System.IO.FileAccess.Read); System.IO.StreamReader sr = new System.IO.StreamReader(fs); res = sr.ReadToEnd(); sr.Close(); fs.Close(); } catch (System.Exception ex) { HandleEx(ex); } return res; } Thank you.
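
    A sketch of the usual fix (not the poster's code): if ReadToEnd ever throws, the Close calls above are skipped and the handle stays open for the life of the service; in addition, the default share mode blocks writers while a read is in progress. Wrapping the streams in using blocks and opening the file with FileShare.ReadWrite addresses both points.

        // Sketch: releases the handle even on error and lets the other program keep writing.
        // Assumes using System and System.IO, plus the HandleEx helper from the question.
        private string ReadTextFile(string filename)
        {
            try
            {
                using (var fs = new FileStream(filename, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
                using (var sr = new StreamReader(fs))
                {
                    return sr.ReadToEnd();
                }
            }
            catch (Exception ex)
            {
                HandleEx(ex);
                return "";
            }
        }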

    Read the article

  • How do I improve the efficiency of the queries executed by this generic Linq-to-SQL data access class?

    - by Lee D
    Hi all, I have a class which provides generic access to LINQ to SQL entities, for example: class LinqProvider<T> //where T is a L2S entity class { DataContext context; public virtual IEnumerable<T> GetAll() { return context.GetTable<T>(); } public virtual T Single(Func<T, bool> condition) { return context.GetTable<T>().SingleOrDefault(condition); } } From the front end, both of these methods appear to work as you would expect. However, when I run a trace in SQL profiler, the Single method is executing what amounts to a SELECT * FROM [Table], and then returning the single entity that meets the given condition. Obviously this is inefficient, and is being caused by GetTable() returning all rows. My question is, how do I get the query executed by the Single() method to take the form SELECT * FROM [Table] WHERE [condition], rather than selecting all rows then filtering out all but one? Is it possible in this context? Any help appreciated, Lee
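
    A sketch of the usual fix, keeping the same class shape as the question: accept an Expression<Func<T, bool>> instead of a compiled Func<T, bool>. SingleOrDefault then binds to the IQueryable overload, so LINQ to SQL translates the condition into the WHERE clause instead of pulling every row and filtering in memory.

        // Sketch only; assumes using System, System.Linq, System.Linq.Expressions and System.Data.Linq.
        class LinqProvider<T> where T : class // T is a L2S entity class
        {
            private readonly DataContext context;

            public LinqProvider(DataContext context) { this.context = context; }

            public virtual IQueryable<T> GetAll()
            {
                return context.GetTable<T>();
            }

            public virtual T Single(Expression<Func<T, bool>> condition)
            {
                // The expression tree reaches the provider, so the generated SQL is a single
                // SELECT ... WHERE <condition> rather than SELECT * FROM [Table] filtered in memory.
                return context.GetTable<T>().SingleOrDefault(condition);
            }
        }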

    Read the article

  • Difference between KeywordQuery, FullTextSqlQuery (object model) and Web service Query

    - by Raghu
    Initially I believed these three to be doing more or less the same thing, with just the notation being different, until recently, when I noticed that there is a big difference between the results of KeywordQuery/FullTextSqlQuery and the Web service Query. I used both the KeywordQuery and FullTextSqlQuery methods to search for the value of a custom column XYZ with the value (ASDSADA-21312ASD-ASDASD). FullTextSqlQuery: When I run the query as a FullTextSqlQuery: FullTextSqlQuery myQuery = new FullTextSqlQuery(site); { // Construct query text String queryText = "Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD') "; myQuery.QueryText = queryText; myQuery.ResultTypes = ResultType.RelevantResults; }; // execute the query and load the results into a datatable ResultTableCollection queryResults = myQuery.Execute(); ResultTable resultTable = queryResults[ResultType.RelevantResults]; // Load table with results DataTable queryDataTable = new DataTable(); queryDataTable.Load(resultTable, LoadOption.OverwriteChanges); I get the following result representing the document: * Title: TestPDF * path: http://SharepointServer/Shared Documents/Forms/DispForm.aspx?ID=94 * author: null * isDocument: false Note the path and isDocument fields of the above result. Web Service Method: Then I tried the Web service Query method. I used the SharePoint Search Service Tool available at http://sharepointsearchserv.codeplex.com/ and ran the same query, i.e. Select title, path, author, isdocument from scope() where freetext('ASDSADA-21312ASD-ASDASD'). This time I got the following results: * Title: TestPDF * path: http://SharepointServer/Shared Documents/TestPDF.pdf * author: null * isDocument: true Again, note the path. While the search results from the second method are useful, as they give me the exact file path, I can't seem to understand why the first method is not giving me the same results. Why is there a discrepancy between the two results?

    Read the article

  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data then I get 32 bytes of data output. It appears that for PKCS7 when the data size matches the block size it adds a whole extra block of data. If I use Zeros padding for 16 bytes of data I get out 16 bytes of data. So for Zeros padding if the data matches the block size then it doesn't pad. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to some kind of documentation which specifies what the padding behavior should be for the different padding modes when the data size matches the block size.
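
    This is the documented behaviour of PKCS#7 padding: it always appends between 1 byte and one full block, so plaintext that already fills a block gains a whole extra padding block, which is what lets the padding be stripped unambiguously on decryption; PaddingMode.Zeros only fills out a partial final block and adds nothing when the data is block-aligned. A minimal sketch that reproduces the two output lengths:

        // Minimal sketch, assuming the default 128-bit block size:
        // PKCS7 pads a full extra block when the plaintext is block-aligned, Zeros does not.
        using System;
        using System.Security.Cryptography;

        class PaddingDemo
        {
            static int EncryptedLength(PaddingMode padding)
            {
                using (var aes = new RijndaelManaged { Padding = padding })
                using (var encryptor = aes.CreateEncryptor())
                {
                    byte[] plaintext = new byte[16]; // exactly one block
                    return encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length).Length;
                }
            }

            static void Main()
            {
                Console.WriteLine(EncryptedLength(PaddingMode.PKCS7));  // 32
                Console.WriteLine(EncryptedLength(PaddingMode.Zeros));  // 16
            }
        }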

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB: Clients <= HTTP = [ RESTlet <= HTTP = CouchDB ] I'm also using CouchDB to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes the password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests using it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users that have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.

    Read the article

  • Windows Service doesn't start process with different credentials

    - by Marcus
    I have a Windows Service, running as a user, that should start several processes under different user credentials. I'm using the following code to start a process: Dim winProcess As New System.Diagnostics.Process With winProcess .StartInfo.Arguments = "some_args" .StartInfo.CreateNoWindow = True .StartInfo.ErrorDialog = False .StartInfo.FileName = "C:\TEMP\ProcessFromService\ProcessFromService\bin\Debug\ProcessFromService.exe" .StartInfo.UseShellExecute = False .StartInfo.WindowStyle = ProcessWindowStyle.Hidden 'Specifying WorkingDirectory can sometimes cause problems if the directory in question 'is not accessible (permissions) to the specified user. 'So it is better not to specify it. '.StartInfo.WorkingDirectory = My.Computer.FileSystem.SpecialDirectories.Temp .StartInfo.Domain = "" .StartInfo.UserName = "MyUserId" Dim strPassword As String = "MyPassword" Dim ssPassword As New Security.SecureString For Each chrPassword As Char In strPassword.ToCharArray ssPassword.AppendChar(chrPassword) Next .StartInfo.Password = ssPassword .Start() End With The process starts correctly when I use the same credentials that the Windows Service is running under. The process is not started, without any error, when I use different credentials. In other words: If the Windows Service is running as UserA then I can start a process running as UserA. If the Windows Service is running as UserB then I cannot start a process running as UserA. I have created a test project in which I can reproduce this problem. If you put this project in C:\Temp then the paths used will be correct. You can download this test project here: https://dl.dropboxusercontent.com/u/5391091/ProcessFromService.zip NB: I hope this info is enough to explain it. If you need more info, please let me know and I will add it.

    Read the article

  • AIR: sync GUI with database?

    - by John Isaacks
    I am going to be building an AIR application that shows a list (about 1-25 rows of data) from a database. The database is on the web. I want the list to be as accurate as possible, meaning as soon as the database data changes, the list displayed in the app should update as soon as possible. I do not know of any way that the AIR application could be notified when there is a change, so I am thinking I am going to have to poll the database at certain intervals to keep the list up to date. So my questions are: first, is there any way to NOT have to keep checking the database? Or, if I do have to keep checking the database, what is a reasonable interval to do that at? Thanks.

    Read the article

  • HLSL: How can one pass data between shaders / read the existing colour value?

    - by RJFalconer
    Hello all, I have 2 HLSL ps2.0 shaders. Simplified, they are: Shader 1 Reads texture Outputs colour value based on this texture Shader 2 Needs to read in existing colour (or have it passed in/read from a register) Outputs the final colour which is a function of the previous colour (They need to be different shaders as I've reached the maximum vertex-shader outputs for 1 shader) My problem is I cannot work out how Shader 2 can access the existing fragment/pixel colour. Is the only way for shaders to interact really just the alpha blending options? These aren't sufficient if I want to use the colour as input to my function.

    Read the article

  • Changing folder ownership to www-data in Ubuntu

    - by Rahul Mehta
    Hi, this is my ls -al output. The zfapi folder has root ownership; how can I change this to www-data? Also, please advise what the first root and the second root are. Thanks
    drwxr-xr-x 4 www-data www-data 4096 2011-01-06 18:21 cdnapi
    -rw-r--r-- 1 www-data www-data 678 2010-08-30 12:02 config.js
    drwxr-xr-x 4 www-data www-data 4096 2010-11-23 15:55 css
    drwxr-xr-x 7 www-data www-data 4096 2010-11-17 13:12 images
    -rw-r--r-- 1 www-data www-data 25064 2010-12-17 18:26 index.html
    -rw-r--r-- 1 www-data www-data 19830 2010-12-18 11:24 init.js
    drwxr-xr-x 2 www-data www-data 4096 2010-12-02 12:34 lib
    -rw-r--r-- 1 www-data www-data 18758 2010-12-06 18:00 styles.css
    -rw-r--r-- 1 www-data www-data 1081 2010-10-21 17:56 testbganim.html
    drwxr-xr-x 2 www-data www-data 4096 2010-12-17 11:15 yapi
    drwxr-xr-x 7 root root 4096 2011-01-07 18:20 zfapi

    Read the article

  • Use queried JSON data in a function

    - by SztupY
    I have a code similar to this: $.ajax({ success: function(data) { text = ''; for (var i = 0; i< data.length; i++) { text = text + '<a href="#" id="Data_'+ i +'">' + data[i].Name + "</a><br />"; } $("#SomeId").html(text); for (var i = 0; i< data.length; i++) { $("#Data_"+i).click(function() { alert(data[i]); RunFunction(data[i]); return false; }); } } }); This gets an array of some data in json format, then iterates through this array generating a link for each entry. Now I want to add a function for each link that will run a function that does something with this data. The problem is that the data seems to be unavailable after the ajax success function is called (although I thought that they behave like closures). What is the best way to use the queried json data later on? (I think setting it as a global variable would do the job, but I want to avoid that, mainly because this ajax request might be called multiple times) Thanks.

    Read the article

  • How do I display data in a table and allow users to copy selected data?

    - by cfouche
    Hi I have a long list of data that I want to display in table format to users. The data changes when the user performs certain actions in my app, but it is not directly editable. So the user can create a reasonably big table of data, but he can't change individual cells' values. However, I do want the data to be copy-able. So I want it to be possible for the user to select some or all of the cells, and do a ctrl-C to copy the data to his clipboard, and then a ctrl-V to paste the data to an external text editor. At the moment, I'm displaying the data in a ListView with a GridView and this works perfectly, except that GridView doesn't allow one to copy data. What other options can I try? Ours is a WPF app, coding in c#.
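
    One option worth sketching (an assumption on my part, not something from the question): WPF's DataGrid, which ships with .NET 4, already provides cell selection and Ctrl-C copy when it is read-only, so swapping the ListView/GridView for a DataGrid avoids writing any clipboard code. A minimal sketch, where myRows stands in for the app's real row objects:

        // Sketch: a read-only DataGrid gives built-in cell selection and Ctrl-C copy.
        // Assumes using System.Collections.Generic and System.Windows.Controls.
        var myRows = new List<object>();   // stand-in for the app's real row collection
        var grid = new DataGrid
        {
            IsReadOnly = true,                                       // data is not editable
            SelectionUnit = DataGridSelectionUnit.CellOrRowHeader,   // select individual cells or whole rows
            ClipboardCopyMode = DataGridClipboardCopyMode.IncludeHeader
        };
        grid.ItemsSource = myRows;   // bind the table data; Ctrl-C then copies the selection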

    Read the article

  • How to use a data type (table) defined in another database in SQL2k8?

    - by Victor Rodrigues
    I have a Table Type defined in a database. It is used as a table-valued parameter in a stored procedure. I would like to call this procedure from another database, and in order to pass the parameter, I need to reference this defined type. But when I do DECLARE @table dbOtherDatabase.dbo.TypeName , it tells me that The type name 'dbOtherDatabase.dbo.TypeName' contains more than the maximum number of prefixes. The maximum is 1. How could I reference this table type?

    Read the article

  • After adding data files to a filegroup, is there a way to distribute the data into the new files?

    - by Blootac
    I have a database with a single file group containing a single data file. I've added 7 data files to this file group. Is there a way to rebalance the data over the 8 data files other than by telling SQL Server to empty the original? If that is the only way, is it possible to allow SQL Server to start writing to the original file again? MSDN says that once it's empty it's marked so no new data will be written to it. What I'm aiming for is 8 equally balanced data files. I'm running SQL Server 2005 Standard Edition. Thanks

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~ 50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much. The network is set up like this: a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location. b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX". c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted. c) Gigabit Ethernet everywhere. The organization's only has one domain, and it did not change during the migration. d) Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles. To sum up: the servers in question handle user accounts and files, nothing else (eg, no TS, no mail, no IIS, etc.) I have two major problems I'm hoping you can help me with: 1) Even though all domain users have had their redirected folders moved to the new server, and loggin in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and sync with it finally. How do I get this data out of those CSC folders, and onto NEWBOX? 2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc? Things I've thought about trying: For problem 1: a) Reenable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by reenabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynched data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynched changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? 
c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs, after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed) MS KB articles and Googling around suggests there's a utility called CSCCMD that's meant for this exact scenario...but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation? For problem 2: a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this. In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

  • Cannot install Windows service

    - by Matthew Dalton
    I have created a very simple Windows service using Visual Studio 2010 and .NET 4.0. This service has no functionality added beyond the default Windows service project, other than an installer that has been added. If I run installutil.exe appName.exe on my dev box or other Windows 2008 R2 machines in our domain, the Windows service installs without issue. When I try to do the same thing on our customer site, it fails to install with the following error. Microsoft (R) .NET Framework Installation utility Version 4.0.30319.1 Copyright (c) Microsoft Corporation. All rights reserved. Exception occurred while initializing the installation: System.IO.FileLoadException: Could not load file or assembly 'file:///C:\TestService\WindowsService1.exe' or one of its dependencies. Operation is not supported. (Exception from HRESULT: 0x80131515). This solution has only 1 project and no dependencies added. I have tried it on multiple machines in our environment and two at our customer's. The machines are all Windows 2008 R2, both fresh installs. One machine has just .NET 2.0 and .NET 4.0, the other .NET 2, 3, 3.5 and 4. I am a local admin on each of the machines. I have also tried the 64-bit installer but get the following error, so I think the 32-bit one is the one to use: System.BadImageFormatException. Any guidance would be appreciated. Thanks.

    Read the article

  • Is it bad practice to use an enum that maps to some seed data in a Database?

    - by skb
    I have a table in my database called "OrderItemType" which has about 5 records for the different OrderItemTypes in my system. Each OrderItem contains an OrderItemType, and this gives me referential integrity. In my middletier code, I also have an enum which matches the values in this table so that I can have business logic for the different types. My dev manager says he hates it when people do this, and I am not exactly sure why. Is there a better practice I should be following?
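
    For illustration only, since the real names live in the poster's OrderItemType table (the member names below are hypothetical): the pattern under discussion is an enum whose numeric values mirror the lookup table's primary keys, so business logic can branch on a typed value while the database keeps referential integrity.

        // Hypothetical sketch of the pattern; the values must stay in sync with the
        // OrderItemType seed rows in the database.
        public enum OrderItemType
        {
            Physical     = 1,
            Download     = 2,
            Subscription = 3,
            GiftCard     = 4,
            Shipping     = 5
        }

        // Business logic can then branch without re-querying the lookup table, e.g.:
        // if ((OrderItemType)orderItem.OrderItemTypeId == OrderItemType.Download) { ... }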

    Read the article

  • What does a custom accessor method implementation in Core Data look like?

    - by dontWatchMyProfile
    The documentation is pretty confusing on this one: The implementation of accessor methods you write for subclasses of NSManagedObject is typically different from those you write for other classes. If you do not provide custom instance variables, you retrieve property values from and save values into the internal store using primitive accessor methods. You must ensure that you invoke the relevant access and change notification methods (willAccessValueForKey:, didAccessValueForKey:, willChangeValueForKey:, didChangeValueForKey:, willChangeValueForKey:withSetMutation:usingObjects:, and didChangeValueForKey:withSetMutation:usingObjects:). NSManagedObject disables automatic key-value observing (KVO, see Key-Value Observing Programming Guide) change notifications, and the primitive accessor methods do not invoke the access and change notification methods. In accessor methods for properties that are not defined in the entity model, you can either enable automatic change notifications or invoke the appropriate change notification methods. Are there any examples that show how these look like?

    Read the article

  • Why does GetWindowThreadProcessId return 0 when called from a service

    - by Marve
    When using the following class in a console application, and having at least one instance of Notepad running, GetWindowThreadProcessId correctly returns a non-zero thread id. However, if the same code is included in a Windows Service, GetWindowThreadProcessId always returns 0 and no exceptions are thrown. Changing the user the service launches under to be the same as the one running the console application didn't alter the result. What causes GetWindowThreadProcessId to return 0 even if it is provided with a valid hwnd? And why does it function differently in the console application and the service? Note: I am running Windows 7 32-bit and targeting .NET 3.5. public class TestClass { [DllImport("user32.dll")] static extern uint GetWindowThreadProcessId(IntPtr hWnd, IntPtr ProcessId); public void AttachToNotepad() { var processesToAttachTo = Process.GetProcessesByName("Notepad") foreach (var process in processesToAttachTo) { var threadID = GetWindowThreadProcessId(process.MainWindowHandle, IntPtr.Zero); .... } } } Console Code: class Program { static void Main(string[] args) { var testClass = new TestClass(); testClass.AttachToNotepad(); } } Service Code: public class TestService : ServiceBase { private TestClass testClass = new TestClass(); static void Main() { ServiceBase.Run(new TestService()); } protected override void OnStart(string[] args) { testClass.AttachToNotepad(); base.OnStart(args); } protected override void OnStop() { ... } }
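
    A small diagnostic sketch, based on an assumption rather than anything stated in the question: a Windows 7 service runs in session 0 on a non-interactive window station, so it cannot see windows that belong to the logged-on user's session; Process.MainWindowHandle then comes back as IntPtr.Zero, and GetWindowThreadProcessId returns 0 when it is handed an invalid window handle. Logging the session IDs and the handle makes that visible:

        // Sketch: compare the target's session with the caller's and dump the handle.
        // Assumes using System and System.Diagnostics; in the service, write to a log instead of the console.
        foreach (var process in Process.GetProcessesByName("Notepad"))
        {
            Console.WriteLine("pid={0} session={1} callerSession={2} mainWindowHandle={3}",
                process.Id,
                process.SessionId,
                Process.GetCurrentProcess().SessionId,
                process.MainWindowHandle);   // IntPtr.Zero when the window isn't visible to this process
        }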

    Read the article

  • Base64 Encoded Data - DB or Filesystem

    - by Marty
    I have a new program that will be generating a lot of Base64 encoded audio and image data. This data will be served via HTTP in the form of XML, and the Base64 data will be inline. These files will most likely reach 20MB and higher. Would it be more efficient to serve these files directly from the filesystem, or would it be feasible to store the data in a MySQL database? Caching will be set up but is overall unnecessary, because it is likely that this data will be purged shortly after it is created and served. I know that storing binary data in the DB is frowned upon in most circumstances, but since this will all be character data I want to see what the consensus is. As of now, I am leaning toward storing them in the filesystem for efficiency reasons, but if it is feasible to store them in a database it would be much easier to manage the data.

    Read the article

  • What should I do to accommodate large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in the table inside a MySQL database. The first column contains the fingerprint, while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below: 34 "doc1, doc2, doc45" The number of fingerprints is very large (it can range up to trillions). There are basically the following operations on the database: inserting/updating a record and retrieving a record by matching on the fingerprint. The table definition Python snippet is: self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)") And the snippet for the insert/update operation is: if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",(","+newDocId, thisFP))== 0L: self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP,newDocId)) The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra but know little about it. Please suggest a better way to tackle this problem.

    Read the article

  • Dynamically add data stored in PHP to nested JSON

    - by HoGo
    I am trying to dynamicaly generate data in json for jQuery gantt chart. I know PHP but am totally green with JavaScript. I have read dozen of solutions on how dynamicaly add data to json, and tried few dozens of combinations and nothing. Here is the json format: var data = [{ name: "Sprint 0", desc: "Analysis", values: [{ from: "/Date(1320192000000)/", to: "/Date(1322401600000)/", label: "Requirement Gathering", customClass: "ganttRed" }] },{ name: " ", desc: "Scoping", values: [{ from: "/Date(1322611200000)/", to: "/Date(1323302400000)/", label: "Scoping", customClass: "ganttRed" }] }, <!-- Somoe more data--> }]; now I have all data in php db result. Here it goes: $rows=$db->fetchAllRows($result); $rowsNum=count($rows); And this is how I wanted to create json out of it: var data=''; <?php foreach ($rows as $row){ ?> data['name']="<?php echo $row['name'];?>"; data['desc']="<?php echo $row['desc'];?>"; data['values'] = {"from" : "/Date(<?php echo $row['from'];?>)/", "to" : "/Date(<?php echo $row['to'];?>)/", "label" : "<?php echo $row['label'];?>", "customClass" : "ganttOrange"}; } However this does not work. I have tried without loop and replacing php variables with plain text just to check, but it did not work either. Displays chart without added items. If I add new item by adding it to the list of values, it works. So there is no problem with the Gantt itself or paths. Based on all above I assume the problem is with adding plain data to json. Can anyone please help me to fix it?

    Read the article
