Search Results

Search found 71953 results on 2879 pages for 'load data infile'.

Page 188/2879 | < Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >

  • What is optimal hardware configuration for heavy load LAMP application

    - by Piotr Kochanski
    I need to run a Linux-Apache-PHP-MySQL application (the Moodle e-learning platform) for a large number of concurrent users - I am aiming at 5000 users. By concurrent I mean that 5000 people should be able to work with the application at the same time. "Work" means not only database reads but writes as well. The application is not very typical, since it does a lot of inserts/updates on the database, so caching techniques do not help much. We are using the InnoDB storage engine. In addition, the application is not written with performance in mind; for instance, one Apache thread usually occupies about 30-50 MB of RAM. I would be grateful for information on what hardware is needed to build a scalable configuration that can handle this kind of load. Right now we are using two HP DLG 380 servers with two 4-core processors each, which can handle a much lower load (typically 300-500 concurrent users). Is it reasonable to invest in this kind of box and build a cluster out of them, or is it better to go with more high-end hardware? I am particularly curious about:

    - how many and how powerful the servers need to be (number of processors/cores, size of RAM)
    - what network equipment should be used (what kind of switches, network cards)
    - any other hardware that is needed, such as particular disk storage solutions

    Another question is how to put everything together, that is, what the most optimal architecture is. Clustering with MySQL is rather hard (people are complaining about MySQL Cluster, even here on Stack Overflow).

    Read the article

  • Modify jkpanel to load internal content instead of external content

    - by Adam Stone
    I am using an implementation of jkpanel in a VB.NET app. The script (see below) loads an external file into the drop-down panel. I need to modify this script to load internal content, such as a specific div or span, so that I can have it nested within the master page.

        //Drop Down Panel script (March 29th, 08'): By JavaScript Kit: http://www.javascriptkit.com
        var jkpanel={
            controltext: 'Panel Content',
            $mainpanel: null,
            contentdivheight: 0,
            openclose:function($, speed){
                this.$mainpanel.stop() //stop any animation
                if (this.$mainpanel.attr('openstate')=='closed')
                    this.$mainpanel.animate({top: 0}, speed).attr({openstate: 'open'})
                else
                    this.$mainpanel.animate({top: -this.contentdivheight+'px'}, speed).attr({openstate: 'closed'})
            },
            init:function(file, height, speed){
                jQuery(document).ready(function($){
                    jkpanel.$mainpanel=$('<div id="dropdownpanel"><div class="contentdiv"></div><div class="control">'+jkpanel.controltext+'</div></div>').prependTo('body')
                    var $contentdiv=jkpanel.$mainpanel.find('.contentdiv')
                    var $controldiv=jkpanel.$mainpanel.find('.control').css({cursor: 'wait'})
                    $contentdiv.load(file, '', function($){
                        var heightattr=isNaN(parseInt(height))? 'auto' : parseInt(height)+'px'
                        $contentdiv.css({height: heightattr})
                        jkpanel.contentdivheight=parseInt($contentdiv.get(0).offsetHeight)
                        jkpanel.$mainpanel.css({top:-jkpanel.contentdivheight+'px', visibility:'visible'}).attr('openstate', 'closed')
                        $controldiv.css({cursor:'hand', cursor:'pointer'})
                    })
                    jkpanel.$mainpanel.click(function(){jkpanel.openclose($, speed)})
                })
            }
        }
        //Initialize script: jkpanel.init('path_to_content_file', 'height of content DIV in px', animation_duration)
        jkpanel.init('panelcontent.htm', '200px', 500)

    Does anybody have any idea how to modify this to do so, or any tips or pointers to point me in the right direction? Cheers
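    One possible direction (a sketch, not a tested drop-in): inside init(), replace the ajax $contentdiv.load(file, ...) call with a copy of an element that is already on the page, and run the sizing logic directly, since there is no ajax callback to wait for. The #panelSource selector and the helper name below are hypothetical.

        // Sketch of a replacement for the ajax step in jkpanel.init(): fill the
        // panel from an element already on the page. Call this instead of
        // $contentdiv.load(file, '', callback); '#panelSource' is hypothetical.
        function fillPanelFromPage($contentdiv, $controldiv, height) {
            $contentdiv.html(jQuery('#panelSource').html());          // internal content
            var heightattr = isNaN(parseInt(height, 10)) ? 'auto' : parseInt(height, 10) + 'px';
            $contentdiv.css({ height: heightattr });
            jkpanel.contentdivheight = parseInt($contentdiv.get(0).offsetHeight, 10);
            jkpanel.$mainpanel
                .css({ top: -jkpanel.contentdivheight + 'px', visibility: 'visible' })
                .attr('openstate', 'closed');
            $controldiv.css({ cursor: 'pointer' });
        }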

    Read the article

  • How can I return IEnumerable data from a function to a GridView with Entity Framework?

    - by programmerist
        protected IEnumerable GetPersonalsData()
        {
            // List personel;
            using (FirmaEntities firmactx = new FirmaEntities())
            {
                var personeldata = (from p in firmactx.Personals
                                    select new { p.ID, p.Name, p.SurName });
                return personeldata.AsEnumerable();
            }
        }

    I want to bind GetPersonelData() to a GridView DataSource, like this:

        gwPersonel.DataSource = GetPersonelData();
        gwPersonel.DataBind();

    On the gwPersonel.DataBind() line I get this error: "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection."

    Read the article

  • RijndaelManaged Padding when data matches block size

    - by trampster
    If I use PKCS7 padding in RijndaelManaged with 16 bytes of data, then I get 32 bytes of data output. It appears that for PKCS7, when the data size matches the block size, it adds a whole extra block of data. If I use Zeros padding for 16 bytes of data, I get out 16 bytes of data. So for Zeros padding, if the data matches the block size, it doesn't pad. I have searched through the documentation and it says nothing about this difference in padding behavior. Can someone please point me to some kind of documentation which specifies what the padding behavior should be for the different padding modes when the data size matches the block size?

    Read the article

  • How do I improve the efficiency of the queries executed by this generic LINQ to SQL data access class

    - by Lee D
    Hi all, I have a class which provides generic access to LINQ to SQL entities, for example:

        class LinqProvider<T> // where T is a L2S entity class
        {
            DataContext context;

            public virtual IEnumerable<T> GetAll()
            {
                return context.GetTable<T>();
            }

            public virtual T Single(Func<T, bool> condition)
            {
                return context.GetTable<T>().SingleOrDefault(condition);
            }
        }

    From the front end, both of these methods appear to work as you would expect. However, when I run a trace in SQL Profiler, the Single method is executing what amounts to a SELECT * FROM [Table], and then returning the single entity that meets the given condition. Obviously this is inefficient, and is being caused by GetTable() returning all rows. My question is, how do I get the query executed by the Single() method to take the form SELECT * FROM [Table] WHERE [condition], rather than selecting all rows then filtering out all but one? Is it possible in this context? Any help appreciated, Lee

    Read the article

  • Run JavaScript code at ASP.NET page load

    - by vaibhav
    I have a radio button list:

        <asp:RadioButtonList CssClass="list" Style="width: 150px" ID="rdo_RSD_ExcerciseRoT" runat="server"
            Font-Bold="false" RepeatDirection="Horizontal" RepeatLayout="Table" TextAlign="Left">
            <asp:ListItem Text="Yes" onclick="en();" Value="Y"></asp:ListItem>
            <asp:ListItem Text="No" onclick="dis();" Value="N" Selected="True"></asp:ListItem>
        </asp:RadioButtonList>

    As you can see, the second ListItem is selected by default. The issue is that when my page loads, dis() is not called. I want to run dis() on page load too. I tried Google; some blogs suggest using the Page.RegisterStartupScript method, but I don't know exactly what the problem is or why we should use that method. I would appreciate it if someone could tell me why this function is not being called and how to call it. Edit: I am including the JavaScript code as well, in case it helps.

        <script type="text/javascript">
            function dis() {
                ValidatorEnable(document.getElementById('<%=RequiredFieldValidator32.ClientID%>'), false);
            }
            function en() {
                ValidatorEnable(document.getElementById('<%=RequiredFieldValidator32.ClientID%>'), true);
            }
        </script>
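    One straightforward approach (a sketch, not from the original post): the inline onclick only fires on user clicks, so call dis() yourself once the page has finished loading, after the ASP.NET validator scripts and controls exist.

        <script type="text/javascript">
            // Sketch: run dis() once the page (including the validator scripts
            // emitted by ASP.NET) has finished loading, so the default "No"
            // selection disables the validator without a user click.
            window.onload = function () {
                dis();
            };
        </script>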

    Read the article

  • How to correctly load dependent JavaScript files

    - by Vaibhav Garg
    I am trying to extend a website page that displays Google Maps with the LabeledMarker. The Google Maps API defines a class called GMarker which is extended by LabeledMarker. The problem is, I can't seem to load the LabeledMarker script properly, i.e. after the Google API loads, and I get the 'GMarker not defined' error. What is the correct way to specify the scripts in such cases? I am using ASP.NET's ClientScript.RegisterClientScriptInclude(), first for the Google API URL and then immediately after for the LabeledMarker script file. The initial Google API loader writes further script links that load the actual GMarker class. Shouldn't all those scripts be executed before the next script block (the LabeledMarker script) is processed? I have checked the generated HTML and the script blocks are emitted in the right order.

        <script src="google api url" type="text/javascript"></script>
        ... (the above script uses document.write() etc. to append further script blocks/sources) ...
        <script src="Scripts/LabeledMarker.js" type="text/javascript"></script>

    Once again, LabeledMarker.js seems to get executed before the Google API finishes loading.
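    One workaround (a sketch, under the assumption that GMarker becomes a global once the Maps API has finished its own asynchronous loading): defer injecting LabeledMarker.js until GMarker actually exists, instead of emitting both script tags up front.

        // Sketch: poll for the GMarker global that the Maps API defines, and only
        // then append the LabeledMarker script, so its dependency is guaranteed
        // to be present before it runs.
        function loadLabeledMarkerWhenReady() {
            if (typeof GMarker === 'undefined') {
                setTimeout(loadLabeledMarkerWhenReady, 100);   // API not ready yet, check again shortly
                return;
            }
            var script = document.createElement('script');
            script.type = 'text/javascript';
            script.src = 'Scripts/LabeledMarker.js';
            document.getElementsByTagName('head')[0].appendChild(script);
        }
        loadLabeledMarkerWhenReady();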

    Read the article

  • Caching Authentication Data

    - by PartlyCloudy
    Hi, I'm currently implementing a REST web service using CouchDB and RESTlet. The RESTlet layer is mainly for authentication and some minor filtering of the JSON data served by CouchDB: Clients <= HTTP => [ RESTlet <= HTTP => CouchDB ]. I'm also using CouchDB to store user login data, because I don't want to add an additional database server for that purpose. Thus, each request to my service causes two CouchDB requests conducted by RESTlet (auth data + "real" request). In order to keep the service as efficient as possible, I want to reduce the number of requests, in this case the redundant requests for login data. My idea now is to provide a cache (i.e. an LRU cache via LinkedHashMap) within my RESTlet application that caches login data, because HTTP caching will probably not be enough. But how do I invalidate the cached data once a user changes the password, for instance? Thanks to REST, the application might run on several servers in parallel, and I don't want to create a central instance just to cache login data. Currently, I save requested auth data in the cache and try to authenticate new requests against it. If authentication fails or there is no entry available, I dispatch a GET request to my CouchDB storage in order to obtain the actual auth data. So in the worst case, users that have changed their data will perhaps still be able to log in with their old credentials. How can I deal with that? Or what is a good strategy to keep the cache(s) up to date in general? Thanks in advance.

    Read the article

  • Load collections eagerly in NHibernate using Criteria API

    - by Zuber
    I have an entity A which HasMany entities B and entities C. All entities A, B and C have some references x, y and z which should be loaded eagerly. I want to read from the database all entities A, and load the collections of B and C eagerly using the Criteria API. So far, I am able to fetch the references in 'A' eagerly. But when the collections are loaded, the references within them are lazily loaded. Here is how I do it:

        AllEntities_A = _session.CreateCriteria(typeof(A))
            .SetFetchMode("x", FetchMode.Eager)
            .SetFetchMode("y", FetchMode.Eager)
            .List<A>().AsQueryable();

    The mapping of entity A using Fluent is as shown below. _B and _C are private ILists for B & C respectively in A.

        Id(c => c.SystemId);
        Version(c => c.Version);
        References(c => c.x).Cascade.All();
        References(c => c.y).Cascade.All();
        HasMany<B>(Reveal.Property<A>("_B"))
            .AsBag()
            .Cascade.AllDeleteOrphan()
            .Not.LazyLoad()
            .Inverse()
            .Cache.ReadWrite().IncludeAll();
        HasMany<C>(Reveal.Property<A>("_C"))
            .AsBag()
            .Cascade.AllDeleteOrphan()
            .LazyLoad()
            .Inverse()
            .Cache.ReadWrite().IncludeAll();

    I don't want to make changes to the mapping file, and would like to load the entire entity A eagerly, i.e. I should get a list of A's where there will be lists of B's and C's whose reference properties are also loaded eagerly.

    Read the article

  • Generate a list of file names based on month and year arithmetic

    - by MacUsers
    How can I list the numbers 01 to 12 (one for each of the 12 months) in such a way that the current month always comes last and the oldest one comes first? In other words, if the number is greater than the current month, it's from the previous year. E.g. 02 is Feb 2011 (the current month right now), 03 is March 2010 and 09 is Sep 2010, but 01 is Jan 2011. In this case, I'd like to have [09, 03, 01, 02]. This is what I'm doing to determine the year:

        for inFile in os.listdir('.'):
            if inFile.isdigit():
                month = months[int(inFile)]
                if int(inFile) <= int(strftime("%m")):
                    year = strftime("%Y")
                else:
                    year = int(strftime("%Y")) - 1
                mnYear = month + ", " + str(year)

    I don't have a clue what to do next. What should I do here? Update: I think I'd better upload the entire script for better understanding.

        #!/usr/bin/env python
        import os, sys
        from time import strftime
        from calendar import month_abbr

        vGroup = {}
        vo = "group_lhcb"
        SI00_fig = float(2.478)
        months = tuple(month_abbr)

        print "\n%-12s\t%10s\t%8s\t%10s" % ('VOs','CPU-time','CPU-time','kSI2K-hrs')
        print "%-12s\t%10s\t%8s\t%10s" % ('','(in Sec)','(in Hrs)','(*2.478)')
        print "=" * 58

        for inFile in os.listdir('.'):
            if inFile.isdigit():
                readFile = open(inFile, 'r')
                lines = readFile.readlines()
                readFile.close()
                month = months[int(inFile)]
                if int(inFile) <= int(strftime("%m")):
                    year = strftime("%Y")
                else:
                    year = int(strftime("%Y"))-1
                mnYear = month + ", " + str(year)
                for line in lines[2:]:
                    if line.find(vo)==0:
                        g, i = line.split()
                        s = vGroup.get(g, 0)
                        vGroup[g] = s + int(i)
                sumHrs = ((vGroup[g]/60)/60)
                sumSi2k = sumHrs*SI00_fig
                print "%-12s\t%10s\t%8s\t%10.2f" % (mnYear,vGroup[g],sumHrs,sumSi2k)
                del vGroup[g]

    When I run the script, I get this:

        [root@serv07 usage]# ./test.py
        VOs             CPU-time     CPU-time    kSI2K-hrs
                        (in Sec)     (in Hrs)    (*2.478)
        ==========================================================
        Jan, 2011      211201372        58667    145376.83
        Dec, 2010        5064337         1406      3484.07
        Feb, 2011       17506049         4862     12048.04
        Sep, 2010      210874275        58576    145151.33

    As I said in the original post, I would like the result to be in this order instead:

        Sep, 2010      210874275        58576    145151.33
        Dec, 2010        5064337         1406      3484.07
        Jan, 2011      211201372        58667    145376.83
        Feb, 2011       17506049         4862     12048.04

    The files in the source directory read like this:

        [root@serv07 usage]# ls -l
        total 3632
        -rw-r--r-- 1 root root 1144972 Feb  9 19:23 01
        -rw-r--r-- 1 root root  556630 Feb 13 09:11 02
        -rw-r--r-- 1 root root  443782 Feb 11 17:23 02.bak
        -rw-r--r-- 1 root root 1144556 Feb 14 09:30 09
        -rw-r--r-- 1 root root  370822 Feb  9 19:24 12

    Did I give a better picture now? Sorry for not being very clear in the first place. Cheers!!

    Update @Mark Ransom: This is the result from Mark's suggestion:

        [root@serv07 usage]# ./test.py
        VOs             CPU-time     CPU-time    kSI2K-hrs
                        (in Sec)     (in Hrs)    (*2.478)
        ==========================================================
        Dec, 2010        5064337         1406      3484.07
        Sep, 2010      210874275        58576    145151.33
        Feb, 2011       17506049         4862     12048.04
        Jan, 2011      211201372        58667    145376.83

    As I said before, I'm looking for the result to be printed in this order: Sep, 2010 - Dec, 2010 - Jan, 2011 - Feb, 2011. Cheers!!

    Read the article

  • Jquery Show Fields depending on Select Menu value but on page load

    - by Tim
    Hello, this question refers to http://stackoverflow.com/questions/835259/jquery-show-hide-fields-depening-on-select-value

        <select id="viewSelector">
            <option value="0">-- Select a View --</option>
            <option value="view1">view1</option>
            <option value="view2">view2</option>
            <option value="view3">view3</option>
        </select>

        <div id="view1"><!-- content --></div>
        <div id="view2a"><!-- content --></div>
        <div id="view2b"><!-- content --></div>
        <div id="view3"><!-- content --></div>

        $(document).ready(function() {
            $.viewMap = {
                '0'     : $([]),
                'view1' : $('#view1'),
                'view2' : $('#view2a, #view2b'),
                'view3' : $('#view3')
            };
            $('#viewSelector').change(function() {
                // hide all
                $.each($.viewMap, function() { this.hide(); });
                // show current
                $.viewMap[$(this).val()].show();
            });
        });

    When I select the 2nd item in the menu it shows the corresponding field. The exception to this is when, on page load, the select menu already has the 2nd menu item selected; then the field does not show. As you may be able to tell, I am new to jQuery and could definitely use some help adjusting this code so that the selected item's field is shown on page load. Thanks, Tim
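    A common fix (a sketch of one approach, not from the original post) is to fire the change handler once inside the same ready callback, so whatever option is pre-selected gets shown immediately.

        // Sketch: after binding the .change() handler in the question's
        // $(document).ready block, fire it once so the pre-selected option's
        // view is shown on page load without any user interaction.
        $('#viewSelector').trigger('change');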

    Read the article

  • Capturing Drupal7 DOM content before page load for comparison

    - by ehime
    We have an MU (multisite) installation of Drupal 7 here at work, and are trying to temporarily hold back the swarm of bots we receive until we get a chance to load our content. I wrote a quick and dirty script to send 503 headers if we find certain criteria via XPath (this can ALSO be done with strpos/preg_match if the DOM is not well formed). In order to get the ball rolling, though, I need to figure out how to either A) hijack the Drupal 7 bootstrap and pull all content through the filter below, or B) ob_flush the content through the filter before the content is loaded. The issue I am having is figuring out exactly where I can catch the content. I thought that index.php in Drupal 7 would be the suspect, but I'm a little confused as to where or how I should capture the contents. Here's the script, and hopefully someone can point me in the right direction.

        //error_reporting(-1);

        /* start query */
        $dom = new DOMDocument;
        $dom->preserveWhiteSpace = false;
        $dom->Load($_SERVER['PHP_SELF']);
        $xpath = new DOMXPath($dom);

        //if this exists we aren't ready to be read by bots
        $query = $xpath->query(".//*[@id='block-views-about-this-site-block']/div/div/div");
        //or $query = 'klat-badge'; //if this is a string not DOM
        /* end query */

        if(strpos($query) !== false) {
            //require banlist
            require('botlist.php');
            $str = strtolower('/'.implode('|', array_unique($list)).'/i');
            if(preg_match($str, strtolower($_SERVER['HTTP_USER_AGENT']))) {
                //so tell bots we're broken
                header('HTTP/1.1 503 Service Temporarily Unavailable');
                header('Status: 503 Service Temporarily Unavailable');
                exit;
            }
        }

    Read the article

  • changing user in ubuntu

    - by Rahul Mehta
    Hi, this is my ls -all output. The zfapi folder is owned by root; how can I change this to www-data? Also, please advise what the first root and the second root columns are. Thanks

        drwxr-xr-x 4 www-data www-data  4096 2011-01-06 18:21 cdnapi
        -rw-r--r-- 1 www-data www-data   678 2010-08-30 12:02 config.js
        drwxr-xr-x 4 www-data www-data  4096 2010-11-23 15:55 css
        drwxr-xr-x 7 www-data www-data  4096 2010-11-17 13:12 images
        -rw-r--r-- 1 www-data www-data 25064 2010-12-17 18:26 index.html
        -rw-r--r-- 1 www-data www-data 19830 2010-12-18 11:24 init.js
        drwxr-xr-x 2 www-data www-data  4096 2010-12-02 12:34 lib
        -rw-r--r-- 1 www-data www-data 18758 2010-12-06 18:00 styles.css
        -rw-r--r-- 1 www-data www-data  1081 2010-10-21 17:56 testbganim.html
        drwxr-xr-x 2 www-data www-data  4096 2010-12-17 11:15 yapi
        drwxr-xr-x 7 root     root      4096 2011-01-07 18:20 zfapi

    Read the article

  • HLSL How can one pass data between shaders / read existing colour value?

    - by RJFalconer
    Hello all, I have 2 HLSL ps2.0 shaders. Simplified, they are:

    Shader 1
    - Reads texture
    - Outputs colour value based on this texture

    Shader 2
    - Needs to read in the existing colour (or have it passed in/read from a register)
    - Outputs the final colour, which is a function of the previous colour

    (They need to be different shaders as I've reached the maximum vertex-shader outputs for 1 shader.) My problem is I cannot work out how Shader 2 can access the existing fragment/pixel colour. Is the only way for shaders to interact really just the alpha blending options? These aren't sufficient if I want to use the colour as input to my function.

    Read the article

  • AIR: sync gui with data-base?

    - by John Isaacks
    I am going to be building an AIR application that shows a list (about 1-25 rows of data) from a database. The database is on the web. I want the list to be as accurate as possible, meaning that as soon as the database data changes, the list displayed in the app should update ASAP. I do not know of any way the AIR application could be notified when there is a change, so I am thinking I will have to poll the database at certain intervals to keep the list up to date. So my question is: first, is there any way to NOT have to keep checking the database? And if I do have to keep checking, what is a reasonable interval to do it at? Thanks.
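    If polling is the route taken, a minimal client-side sketch might look like the following (the fetchList function and the 60-second interval are assumptions, not anything from the post); AIR applications built with the HTML/JavaScript runtime can use an ordinary timer for this.

        // Minimal polling sketch: re-request the list on a fixed timer.
        // fetchList() is a hypothetical function that calls the web database's
        // endpoint and redraws the 1-25 row list; 60000 ms is an arbitrary choice.
        var POLL_INTERVAL_MS = 60000;
        setInterval(function () {
            fetchList();
        }, POLL_INTERVAL_MS);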

    Read the article

  • Load Pymacs & Ropemacs only when opening a Python file ?

    - by Mtgred
    I use Pymacs to load Ropemacs and Rope with the following lines in my .emacs, as described here:

        (autoload 'pymacs-load "pymacs" nil t)
        (pymacs-load "ropemacs" "rope-")

    However, it slows down the startup of Emacs significantly, as it takes a while to load Ropemacs. I tried the following line instead, but that loads Ropemacs every time a Python file is opened...

        (add-hook 'python-mode-hook
                  (lambda () (pymacs-load "ropemacs" "rope-")))

    Is there a way to perform the pymacs-load when opening a Python file, but only if ropemacs and rope aren't loaded yet?

    Read the article

  • JDO, GAE: Load object group by child's key

    - by tiex
    I have an owned one-to-many relationship between two objects:

        @PersistenceCapable(identityType = IdentityType.APPLICATION)
        public class AccessInfo {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private com.google.appengine.api.datastore.Key keyInternal;
            ...
            @Persistent
            private PollInfo currentState;

            public AccessInfo(){}

            public AccessInfo(String key, String voter, PollInfo currentState) {
                this.voter = voter;
                this.currentState = currentState;
                setKey(key); // this key is unique in whole systme
            }

            public void setKey(String key) {
                this.keyInternal = KeyFactory.createKey(
                    AccessInfo.class.getSimpleName(), key);
            }

            public String getKey() {
                return this.keyInternal.getName();
            }

    and

        @PersistenceCapable(identityType = IdentityType.APPLICATION)
        public class PollInfo {
            @PrimaryKey
            @Persistent(valueStrategy = IdGeneratorStrategy.IDENTITY)
            private Key key;

            @Persistent(mappedBy = "currentState")
            private List<AccessInfo> accesses;
            ...

    I created an instance of class PollInfo and made it persistent. That works. But then I want to load this group by the AccessInfo key, and I get the exception NucleusObjectNotFoundException. Is it possible to load a group by a child's key?

    Read the article

  • Use queried json data in a function

    - by SztupY
    I have code similar to this:

        $.ajax({
            success: function(data) {
                text = '';
                for (var i = 0; i < data.length; i++) {
                    text = text + '<a href="#" id="Data_'+ i +'">' + data[i].Name + "</a><br />";
                }
                $("#SomeId").html(text);
                for (var i = 0; i < data.length; i++) {
                    $("#Data_"+i).click(function() {
                        alert(data[i]);
                        RunFunction(data[i]);
                        return false;
                    });
                }
            }
        });

    This gets an array of data in JSON format, then iterates through the array generating a link for each entry. Now I want to add a function to each link that will do something with this data. The problem is that the data seems to be unavailable after the ajax success function is called (although I thought they behave like closures). What is the best way to use the queried JSON data later on? (I think setting it as a global variable would do the job, but I want to avoid that, mainly because this ajax request might be called multiple times.) Thanks.
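    The usual culprit here is that every click handler shares the same loop variable i, which has already run past the end of the array by the time a link is clicked. One way around it (a sketch, not from the original post) is to give each handler its own captured value, for example with $.each:

        // Sketch: $.each gives each iteration its own `i` and `item`, so the click
        // handlers keep a reference to the right element of `data` even after the
        // success callback has finished.
        $.each(data, function (i, item) {
            $('#Data_' + i).click(function () {
                RunFunction(item);   // RunFunction comes from the question's code
                return false;
            });
        });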

    Read the article

  • JQuery load help

    - by mtwallet
    Hi. I am trying to use load() to place some HTML into a div on a page. I have a bunch of links like this:

        <div id="slideshow">
            <div id="slides">
                <div class="projects">
                    <a href="work/mobus.html" title="Mobus Fabrics Website">
                        <img src="images/work/mobus.jpg" alt="Mobus Fabrics Website" width="280" height="100" />
                    </a>
                    <a href="work/eglin.html" title="Eglin Ltd Website">
                        <img src="images/work/eglin.jpg" alt="Eglin Ltd Website" width="280" height="100" />
                    </a>
                    <a href="work/first-brands.html" title="First Brands Website">
                        <img src="images/work/first-brands.jpg" alt="First Brands Website" width="280" height="100" />
                    </a>
                </div>
                <a id="prev"></a>
                <a id="next"></a>
            </div>

    and my jQuery code looks like this:

        $('.projects a').click(function() {
            $('#work').load(this.href);
        });

    The problem is that when a link is clicked, instead of the HTML just being placed in the #work div, the HTML is loaded in another page. Please can anyone help?
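    One likely fix (a sketch, based on the usual cause): the click handler never cancels the link's default navigation, so the browser follows the href as well. Preventing the default keeps the loaded content in #work.

        // Sketch: cancel the link's default navigation so only the ajax load runs.
        $('.projects a').click(function (e) {
            e.preventDefault();            // or simply `return false;` at the end
            $('#work').load(this.href);
        });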

    Read the article

  • NASM - Load code from USB Drive

    - by new123456
    Hello, would any assembly gurus know the argument (register dl) that signifies the first USB drive? I'm working through a couple of NASM tutorials, and would like to get a physical boot (I can get a clean one with qemu). This is the section of code that loads the "kernel" data from disk:

        loadkernel:
            mov si, LMSG    ;; 'Loading kernel',13,10,0
            call prints     ;; ex puts()
            mov dl, 0x00    ;; The disk to load from
            mov ah, 0x02    ;; Read operation
            mov al, 0x01    ;; Sectors to read
            mov ch, 0x00    ;; Track
            mov cl, 0x02    ;; Sector
            mov dh, 0x00    ;; Head
            mov bx, 0x2000  ;; Buffer end
            mov es, bx
            mov bx, 0x0000  ;; Buffer start
            int 0x13
            jc loadkernel
            mov ax, 0x2000
            mov ds, ax
            jmp 0x2000:0x00

    If it makes any difference, I'm running a stock Dell Inspiron 15 BIOS. Apparently, the correct value for me is 0x80: the BIOS loads the hard drives and labels them starting at 0x80, according to this answer, and my particular BIOS decides to list the USB drive as the first one for some reason, so I can boot from there.

    Read the article

  • XmlDocument.Load() throws XmlSchemaValidationException

    - by Praetorian
    Hi, I'm trying to validate an XML document against a schema (which is embedded in my program as a resource). I got everything to work, so I tried to test for errors by adding a second sibling node in the XML at a location where the schema specifies maxOccurs="1". The problem is that my ValidationEventHandler is never getting called; also, XmlDocument.Load() is throwing an XmlSchemaValidationException exception when I'd expected XmlDocument.Validate() to do that. This is the code I have:

        private void ValidateUserData( string xmlPath )
        {
            var resInfo = Application.GetResourceStream( new Uri( @"MySchema.xsd", UriKind.Relative ) );
            var schema = XmlSchema.Read( resInfo.Stream, SchemaValidationCallBack );

            XmlSchemaSet schemaSet = new XmlSchemaSet();
            schemaSet.Add( schema );
            schemaSet.ValidationEventHandler += SchemaValidationCallBack;

            XmlReaderSettings settings = new XmlReaderSettings();
            settings.Schemas = schemaSet;
            settings.ValidationType = ValidationType.Schema;

            XmlDocument doc = new XmlDocument();
            using( XmlReader reader = XmlReader.Create( xmlPath, settings ) )
            {
                doc.Load( reader ); // <-- This line throws an exception if XML is ill-formed
                reader.Close();
            }

            doc.Validate( SchemaValidationCallBack ); // <-- This is never reached
        }

        private void SchemaValidationCallBack( object sender, ValidationEventArgs e )
        {
            Console.WriteLine( "SchemaValidationCallBack: " + e.Message );
        }

    How do I get the callback to be called so I can handle validation errors? Thanks for your help!

    Read the article

  • jquery load returns empty, possible MVC 2 problem?

    - by Max Fraser
    I have a site that needs to get some data from a different site that is using ASP.NET MVC. The data to be loaded comes from these pages:

    http://charity.hondaclassic.com/home/totaldonations
    http://charity.hondaclassic.com/Home/CharityList

    This should be a no-brainer, but for some reason I get an empty response. Here is my JS:

        <script>
            jQuery.noConflict();
            jQuery(document).ready(function($){
                $('.totalDonations').load('http://charity.hondaclassic.com/home/totaldonations');
                $('#charityList').load('http://charity.hondaclassic.com/home/CharityList');
            });
        </script>

    In Firebug I see the request is made and comes back with a response of 200 OK, but the response is empty; if you browse to these pages they work fine! What the heck? Here are the controller actions from the MVC site:

        public ActionResult TotalDonations()
        {
            var total = "$" + repo.All<Customer>().Sum(x => x.AmountPaid).ToString();
            return Content(total);
        }

        public ActionResult CharityList()
        {
            var charities = repo.All<Company>();
            return View(charities);
        }

    Can someone please point out what stupid little thing I am missing - this should have taken me 5 minutes and it's been hours!
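    The symptom (200 OK but an empty body that only the browser can see directly) is the classic sign of the browser's same-origin policy blocking a cross-domain XMLHttpRequest, rather than anything in the MVC code. One hedged workaround sketch: request the data through pages on your own domain that proxy the remote URLs (the /proxy/... paths below are hypothetical and would have to be implemented server-side).

        // Sketch: load via same-origin proxy endpoints instead of the remote
        // charity.hondaclassic.com URLs. '/proxy/totaldonations' and
        // '/proxy/charitylist' are hypothetical pages on the calling site that
        // fetch and echo the remote content server-side.
        jQuery(document).ready(function ($) {
            $('.totalDonations').load('/proxy/totaldonations');
            $('#charityList').load('/proxy/charitylist');
        });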

    Read the article

  • CSC folder data access AND roaming profiles issues (Vista with Server 2003, then 2008)

    - by Alex Jones
    I'm a junior sysadmin for an IT contractor that helps small, local government agencies, like little towns and the like. One of our clients, a public library with ~50 staff users, was recently migrated from Server 2003 Standard to Server 2008 R2 Standard in a very short timeframe; our senior employee, the only network engineer, had suddenly put in his two weeks notice, so management pushed him to do this project before quitting. A bit hasty on management's part? Perhaps. Could we do anything about that? Nope. Do I have to fix this all by myself? Pretty much.

    The network is set up like this:

    a) 50ish staff workstations, all running Vista Business SP2. All staff use MS Outlook, which uses RPC-over-HTTPS ("Outlook Anywhere") for cached Exchange access to an offsite location.
    b) One new (virtualized) Server 2008 R2 Standard instance, running atop a Server 2008 R2 host via Hyper-V. The VM is the domain's DC, and also the site's one and only file server. Let's call that VM "NEWBOX".
    c) One old physical Server 2003 Standard server, running the same roles. Let's call it "OLDBOX". It's still on the network and accessible, but it's been demoted, and its shares have been disabled. No data has been deleted.
    d) Gigabit Ethernet everywhere.

    The organization has only one domain, and it did not change during the migration. Most users were set up for a combo of redirected folders + offline files, but some older employees who had been with the organization a long time are still on roaming profiles. To sum up: the servers in question handle user accounts and files, nothing else (e.g. no TS, no mail, no IIS, etc.)

    I have two major problems I'm hoping you can help me with:

    1) Even though all domain users have had their redirected folders moved to the new server, and logging in to their workstations and testing confirms that the Documents/Music/Whatever folders point to the new paths, it appears some users (not laptops or anything either!) had been working offline from OLDBOX for a long time, and nobody realized it. Here's the ugly implication: a bunch of their data now lives only in their CSC folders, because they can't access the share on OLDBOX and finally sync with it. How do I get this data out of those CSC folders, and onto NEWBOX?

    2) What's the best way to migrate roaming profile users to non-roaming ones, without losing vital data like documents, any lingering PSTs, etc.?

    Things I've thought about trying:

    For problem 1:

    a) Re-enable the documents share on OLDBOX, force an Offline Files sync for ALL domain users, then copy OLDBOX's share's data to the equivalent share on NEWBOX. Reinitialize the Offline Files cache for every user. With this: How do I safely force a domain-wide Offline Files sync? Could I lose data by re-enabling the share on OLDBOX and forcing the sync? Afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    b) Determine which users have unsynced changes to OLDBOX (again, how?), search each user's CSC folder domain-wide via workstation admin shares, and grab the unsynced data. Reinitialize the Offline Files cache for every user. With this: How can I detect which users have unsynced changes with a script? How can I search each user's CSC folder, when the ownership and permissions set for CSC folders are so restrictive? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    c) Manually visit each workstation, copy the contents of the CSC folder, and manually copy that data onto NEWBOX. Reinitialize the Offline Files cache for every user. With this: Again, how do I 'break into' the CSC folder and get to its data? As an experiment, I took one workstation's HD offsite, imaged it for safety, and then tried the following with one of our shop PCs, after attaching the drive: grant myself full control of the folder (failed), grant myself ownership of the folder (failed), run chkdsk on the whole drive to make sure nothing's messed up (all OK), try to take full control of the entire drive (failed), try to take ownership of the entire drive (failed). MS KB articles and Googling around suggest there's a utility called CSCCMD that's meant for this exact scenario... but it looks like it's available for XP, not Vista, no? Again, afterwards, how can I reinitialize the Offline Files cache for every user, without doing it manually, workstation by workstation?

    For problem 2:

    a) Figure out which users are on roaming profiles, and where their profiles 'live' on the server. Create new folders for them in the redirected folders repository, migrate existing data, and disable the roaming. With this: Finding out who's roaming isn't hard. But what's the best way to disable the roaming itself? In AD Users and Computers, or on each user's workstation? Doing it centrally on the server seems more efficient; that said, all of the KB research I've done turns up articles on how to go from local to roaming, not the other way around, so I don't have good documentation on this.

    In closing: we have good backups of NEWBOX and OLDBOX, but not of the workstations themselves, so anything drastic on the client side would need imaging and testing for safety. Thanks for reading along this far! Hopefully you can help me dig us out of this mess.

    Read the article

< Previous Page | 184 185 186 187 188 189 190 191 192 193 194 195  | Next Page >