Search Results

Search found 7097 results on 284 pages for 'calls'.


  • SQL Server CTE referenced in self joins is slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain many rows. The CTE has a WHERE clause that restricts the rows to a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criteria. The query is quite complex, but here is a simplified pseudo-version of it: WITH DataCTE as ( SELECT [columns] FROM table INNER JOIN table2 ON [...] INNER JOIN table3 ON [...] LEFT JOIN table3 ON [...] ) SELECT [aggregates_columns of each subset] FROM DataCTE Main LEFT JOIN DataCTE BananasSubset ON [...] AND Product = 'Bananas' AND Quality = 100 LEFT JOIN DataCTE DamagedBananasSubset ON [...] AND Product = 'Bananas' AND Quality < 20 LEFT JOIN DataCTE MangosSubset ON [...] GROUP BY [ I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems to be confirmed by the execution plan, although I confess I'm not an expert at reading those. I would have assumed SQL Server would be smart enough to perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach, but rather than using a CTE to get the subset of the data, I used the same SELECT query as in the CTE and made it output to a temp table instead. The version referencing the CTE takes 40 seconds. The version referencing the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table-valued UDF, which I find a slightly less elegant solution. Have any of you had this kind of performance issue with CTEs, and if so, how did you get it sorted? Thanks, Kharlos

    Read the article

  • Implementing a scalable and high-performing web app

    - by Christopher McCann
    I have asked a few questions on here before about various things relating to this, but this is more of a consolidation question, as I would like to check that I have got the gist of everything. I am in the middle of developing a social media web app, and although I have a lot of experience coding in Java and in PHP, I am trying things a bit differently this time. I have modularised each component of the application. So, for example, one component of the application allows users to private-message each other, and I have split this off into its own private messaging service. I have also created a user data service, the purpose of which is to return data about the user from the database - for example their name, address, age, etc. There is also another service, the friends service, which will work off the neo4j database to create a social graph. My reason for doing all this is to allow me to update separate modules when I need to - so while they mostly all run off MySQL right now, I could move one to Cassandra later if I thought it appropriate. The actual code of the web app is really just used for the final construction. The modules behind it don't really follow any strict REST or SOAP protocol. Basically each method on our API is turned into a PHP procedural script. This may then make calls to other back-end code, which tends to be OO. The web app makes cURL requests to these pages and POSTs data to them or GETs data from them. These pages then return JSON where data is required. I'm still a little mixed up about how I actually identify which user is logged in at that moment. Do I just use sessions for that? For instance, if we called the get-messages.php script, which equates to the getMessages() method for that user - returning all the private messages for that user - how would the back-end code know which user it is, as posting the user's ID to the script would not be secure? Anyone could do that and get all the messages. So I thought I would use sessions for it. Am I correct on this? Can anyone spot any other problems with what I am doing here? Thanks

    Read the article

  • How to fix the position of buttons in an applet

    - by user1609804
    I'm trying to make an applet that has buttons on the right, where each button corresponds to a certain pokemon. I already did it, but the buttons aren't fixed; they keep following the mouse. Please help. This is my code: import javax.swing.*; import java.awt.image.BufferedImage; import java.io.*; import javax.imageio.ImageIO; import java.applet.*; import java.awt.event.*; import java.awt.*; public class choosePokemon extends Applet implements ActionListener { private int countPokemon; private int[] storePokemon; private int x,y; //this will be the x and y coordinate of the button BufferedImage Picture; public int getCountPokemon(){ //for other classes that need how many pokemon return countPokemon; } public int[] getStoredPokemon(){ //for other classes that need the pokemon return storePokemon; } public void init(){ x=0;y=0; try{ Picture = ImageIO.read(new File("pokeball.png")); } catch( IOException ex ){ } } public void paint( Graphics g ){ pokemon display = new pokemon(); // to access the pokemon attributes in class pokemon ButtonGroup group = new ButtonGroup(); //create a button group for( int a=0;a<16;a++ ){ // for loop displaying the buttons of every pokemon (one pokemon, one button) display.choose( a ); //calls the method choose, which accepts an integer from 0-15 and saves the attributes of the pokemon corresponding to the integer JButton pokemonButton = new JButton( display.getName() ); // creates the button pokemonButton.setActionCommand( display.getName() ); // saves the name of whichever pokemon this is into the action command pokemonButton.addActionListener(this); //registers the newly created button with the listener so we know when the button is clicked pokemonButton.setBounds( x,y,50,23 ); group.add( pokemonButton ); //adds the newly created button to a group made up entirely of buttons (the pokemon buttons) y+=23; if( a==7 ){ x+=50; y=0; } add( pokemonButton ); //will add the button to the applet } g.drawImage( Picture, 120, 20, null ); } public void actionPerformed(ActionEvent e) { try{ //displays the picture of the selected pokemon Picture = ImageIO.read(new File( "pokemon/" + e.getActionCommand() + ".png" )); } catch( IOException ex ){ } } public boolean chosen( int PChoice ){ //this will check if the chosen pokemon is already the player's pokemon boolean flag = false; for( int x=0; x<countPokemon && !flag ;x++ ){ if( storePokemon[x]==PChoice ){ flag = true; } } return flag; } }

    Read the article

  • How does the event dispatch thread work?

    - by Roman
    With the help of people on stackoverflow I was able to get the following working code for the simplest GUI countdown (it just displays a window counting down seconds). My main problem with this code is the invokeLater stuff. As far as I understand, invokeLater sends a task to the event dispatching thread (EDT) and then the EDT executes this task whenever it "can" (whatever that means). Is that right? To my understanding the code works like this: In the main method we use invokeLater to show the window (showGUI method). In other words, the code displaying the window will be executed in the EDT. In the main method we also start the counter, and the counter (by construction) is executed in another thread (so it is not in the event dispatching thread). Right? The counter is executed in a separate thread and periodically it calls updateGUI. updateGUI is supposed to update the GUI. And the GUI is working in the EDT. So updateGUI should also be executed in the EDT. That is why the code for updateGUI is enclosed in invokeLater. Is that right? What is not clear to me is why we start the counter from the EDT. It is not executed in the EDT anyway; it immediately starts a new thread and the counter is executed there. So why can't we start the counter in the main method after the invokeLater block? import javax.swing.JFrame; import javax.swing.JLabel; import javax.swing.SwingUtilities; public class CountdownNew { static JLabel label; // Method which defines the appearance of the window. public static void showGUI() { JFrame frame = new JFrame("Simple Countdown"); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); label = new JLabel("Some Text"); frame.add(label); frame.pack(); frame.setVisible(true); } // Define a new thread in which the countdown is counting down. public static Thread counter = new Thread() { public void run() { for (int i=10; i>0; i=i-1) { updateGUI(i,label); try {Thread.sleep(1000);} catch(InterruptedException e) {}; } } }; // A method which updates GUI (sets a new value of JLabel). private static void updateGUI(final int i, final JLabel label) { SwingUtilities.invokeLater( new Runnable() { public void run() { label.setText("You have " + i + " seconds."); } } ); } public static void main(String[] args) { SwingUtilities.invokeLater(new Runnable() { public void run() { showGUI(); counter.start(); } }); } }

    Read the article

  • How to parse an XML string in TDI

    - by ongle
    I am new to TDI. I have a TDI assembly line that calls a web service (ibmdi.InvokeSoapWS) which returns the result as a string in the work attribute 'xmlString'. I then have an AttributeMap that attempts to parse the xml and extract a value (the node it seeks is a few nodes deep). var parser = system.getParser('ibmdi.SOAP'); var xmlString = work.getString('xmlString'); var entity = parser.parseRequest(xmlString); task.dump(entity); The trouble is, the parsed object does not contain an accurate representation of the XML. It contains only two attributes, the first is correctly the first node following the soap body (e.g., ns0:SomeNodeReply), the second being a node inside the first (e.g., ns0:DetailCount). And as far as I can determine, both attributes are just strings so I cannot recurse into the object graph. Below is a sample soap reply: <?xml version="1.0" encoding="utf-8"?> <SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/"> <SOAP-ENV:Body> <ns0:SomeNodeReply xmlns:ns0="http://xmlns.example.com/unique/default/namespace/1136581686664"> <ns0:Status> <ns0:StatusCD>000</ns0:StatusCD> <ns0:StatusDesc /> </ns0:Status> <ns0:DetailCount>1</ns0:DetailCount> <ns0:SomeDetail> <ns0:CodeA>Foo</ns0:CodeA> <ns0:CodeB>Bar</ns0:CodeB> </ns0:SomeDetail> </ns0:SomeNodeReply> </SOAP-ENV:Body> </SOAP-ENV:Envelope> And below is a sample dump of the parsed string: 19:03:23 CTGDIS003I *** Start dumping Entry 19:03:23 Operation: generic 19:03:23 Entry attributes: 19:03:23 SOAP_CALL (replace): 'ns0:SomeNodeReply' 19:03:23 ns0:DetailCount(replace): '1' 19:03:23 CTGDIS004I *** Finished dumping Entry All I really need to do is be able to parse out a value that may or may not be there, depending on the value of another node (e.g., DetailCount == 1, get CodeA otherwise return empty string). I am open to changing anything about how this works if I can extract the data into the work Entry.

    Read the article

  • check if directory exists c#

    - by Ant
    I am trying to see if a directory exists based on an input field from the user. When the user types in the path, I want to check if the path actually exists. I have some c# code already. It returns 1 for any local path, but always returns 0 when I am checking a network path. static string checkValidPath(string path) { //Insert your code that runs under the security context of the authenticating user here. using (ImpersonateUser user = new ImpersonateUser(user, "", password)) { //DirectoryInfo d = new DirectoryInfo(quotelessPath); bool doesExist = Directory.Exists(path); //if (d.Exists) if(doesExist) { user.Dispose(); return "1"; } else { user.Dispose(); return "0"; } } } public class ImpersonateUser : IDisposable { [DllImport("advapi32.dll", SetLastError = true)] private static extern bool LogonUser(string lpszUsername, string lpszDomain, string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken); [DllImport("kernel32", SetLastError = true)] private static extern bool CloseHandle(IntPtr hObject); private IntPtr userHandle = IntPtr.Zero; private WindowsImpersonationContext impersonationContext; public ImpersonateUser(string user, string domain, string password) { if (!string.IsNullOrEmpty(user)) { // Call LogonUser to get a token for the user bool loggedOn = LogonUser(user, domain, password, 9 /*(int)LogonType.LOGON32_LOGON_NEW_CREDENTIALS*/, 3 /*(int)LogonProvider.LOGON32_PROVIDER_WINNT50*/, out userHandle); if (!loggedOn) throw new Win32Exception(Marshal.GetLastWin32Error()); // Begin impersonating the user impersonationContext = WindowsIdentity.Impersonate(userHandle); } } public void Dispose() { if (userHandle != IntPtr.Zero) CloseHandle(userHandle); if (impersonationContext != null) impersonationContext.Undo(); } } Any help is appreciated. Thanks! EDIT 3: updated code to use BrokenGlass's impersonation functions. However, I need to initialize "password" to something... EDIT 2: I updated the code to try and use impersonation as suggested below. It still fails everytime. I assume I am using impersonation improperly... EDIT: As requested by ChrisF, here is the function that calls the checkValidPath function. Frontend aspx file... $.get('processor.ashx', { a: '7', path: x }, function(o) { alert(o); if (o=="0") { $("#outputPathDivValid").dialog({ title: 'Output Path is not valid! Please enter a path that exists!', width: 500, modal: true, resizable: false, buttons: { 'Close': function() { $(this).dialog('close'); } } }); } }); Backend ashx file... public void ProcessRequest (HttpContext context) { context.Response.Cache.SetExpires(DateTime.Now); string sSid = context.Request["sid"]; switch (context.Request["a"]) {//a bunch of case statements here... case "7": context.Response.Write(checkValidPath(context.Request["path"].ToString())); break;
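
    A note and a hedged sketch on the code above: new ImpersonateUser(user, "", password) passes the variable being declared into its own constructor, which won't compile - the username, domain and password have to come from somewhere else (secure configuration, most likely). Assuming the ImpersonateUser helper from the question compiles once using System.Runtime.InteropServices;, using System.Security.Principal; and using System.ComponentModel; are added, the check itself can stay small; the credential constants below are placeholders:

        using System.IO;

        public static class PathChecker
        {
            // Placeholder credentials - replace with values from secure configuration.
            private const string ShareUser = "someUser";
            private const string ShareDomain = "someDomain";
            private const string SharePassword = "somePassword";

            public static string CheckValidPath(string path)
            {
                // Impersonate an account that has rights on the network share and
                // probe the directory while the impersonation is still active.
                using (new ImpersonateUser(ShareUser, ShareDomain, SharePassword))
                {
                    return Directory.Exists(path) ? "1" : "0";
                }
            }
        }

    The explicit user.Dispose() calls in the original are also redundant inside a using block - Dispose runs automatically when the block exits.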

    Read the article

  • Is this implementation truly tail-recursive?

    - by CFP
    Hello everyone! I've come up with the following code to compute, in a tail-recursive way, the result of an expression such as 3 4 * 1 + cos 8 * (aka 8*cos(1+(3*4))) The code is in OCaml. I'm using a list ref to emulate a stack. type token = Num of float | Fun of (float->float) | Op of (float->float->float);; let pop l = let top = (List.hd !l) in l := List.tl (!l); top;; let push x l = l := (x::!l);; let empty l = (l = []);; let pile = ref [];; let eval data = let stack = ref data in let rec _eval cont = match (pop stack) with | Num(n) -> cont n; | Fun(f) -> _eval (fun x -> cont (f x)); | Op(op) -> _eval (fun x -> cont (op x (_eval (fun y->y)))); in _eval (fun x->x) ;; eval [Fun(fun x -> x**2.); Op(fun x y -> x+.y); Num(1.); Num(3.)];; I've used continuations to ensure tail recursion, but since my stack implements some sort of a tree, and therefore provides quite a bad interface to what should be handled as a disjoint union type, the call to my function that evaluates the left branch with an identity continuation irks me a little. It's working perfectly, yet I have the feeling that in the _eval (fun y->y) bit something wrong must be happening, since it doesn't seem that this call can replace the previous one in the stack structure... Am I misunderstanding something here? I mean, I understand that with only the first call to _eval there wouldn't be any problem optimizing the calls, but here it seems to me that evaluating _eval (fun y->y) will require a new stack frame, and therefore will fill the stack, possibly leading to an overflow... Thanks!

    Read the article

  • Problem updating through LINQtoSQL in MVC application using StructureMap, Repository Pattern and UoW

    - by matt
    I have an ASP MVC application using LINQ to SQL for data access. I am trying to use the Repository and Unit of Work patterns, with a service layer consuming the repositories and unit of work. I am experiencing a problem when attempting to perform updates on a particular repository. My application architecture is as follows: My service class: public class MyService { private IRepositoryA _RepositoryA; private IRepositoryB _RepositoryB; private IUnitOfWork _unitOfWork; public MyService(IRepositoryA ARepositoryA, IRepositoryB ARepositoryB, IUnitOfWork AUnitOfWork) { _unitOfWork = AUnitOfWork; _RepositoryA = ARepositoryA; _RepositoryB = ARepositoryB; } public PerformActionOnObject(Guid AID) { MyObject obj = _RepositoryA.GetRecords() .WithID(AID); obj.SomeProperty = "Changed to new value"; _RepositoryA.UpdateRecord(obj); _unitOfWork.Save(); } } Repository interface: public interface IRepositoryA { IQueryable<MyObject> GetRecords(); UpdateRecord(MyObject obj); } Repository LINQtoSQL implementation: public class LINQtoSQLRepositoryA : IRepositoryA { private MyDataContext _DBContext; public LINQtoSQLRepositoryA(IUnitOfWork AUnitOfWork) { _DBConext = AUnitOfWork as MyDataContext; } public IQueryable<MyObject> GetRecords() { return from records in _DBContext.MyTable select new MyObject { ID = records.ID, SomeProperty = records.SomeProperty } } public bool UpdateRecord(MyObject AObj) { MyTableRecord record = (from u in _DB.MyTable where u.ID == AObj.ID select u).SingleOrDefault(); if (record == null) { return false; } record.SomeProperty = AObj.SomePropery; return true; } } Unit of work interface: public interface IUnitOfWork { void Save(); } Unit of work implemented in data context extension. public partial class MyDataContext : DataContext, IUnitOfWork { public void Save() { SubmitChanges(); } } StructureMap registry: public class DataServiceRegistry : Registry { public DataServiceRegistry() { // Unit of work For<IUnitOfWork>() .HttpContextScoped() .TheDefault.Is.ConstructedBy(() => new MyDataContext()); // RepositoryA For<IRepositoryA>() .Singleton() .Use<LINQtoSQLRepositoryA>(); // RepositoryB For<IRepositoryB>() .Singleton() .Use<LINQtoSQLRepositoryB>(); } } My problem is that when I call PerformActionOnObject on my service object, the update never fires any SQL. I think this is because the datacontext in the UnitofWork object is different to the one in RepositoryA where the data is changed. So when the service calls Save() on it's IUnitOfWork, the underlying datacontext does not have any updated data so no update SQL is fired. Is there something I've done wrong in the StrutureMap registry setup? Or is there a more fundamental problem with the design? Many thanks.
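
    If the cause is the registry lifetimes - the repositories are singletons, so they capture the DataContext of the very first request, while the service gets a fresh HttpContext-scoped IUnitOfWork on each request and calls Save() on a context the repository never touched - then one hedged fix is to give the repositories the same lifetime as the unit of work. A sketch against the registry shown above (StructureMap 2.x syntax assumed):

        public class DataServiceRegistry : Registry
        {
            public DataServiceRegistry()
            {
                // One DataContext (unit of work) per HTTP request, as before.
                For<IUnitOfWork>()
                    .HttpContextScoped()
                    .TheDefault.Is.ConstructedBy(() => new MyDataContext());

                // Repositories must not outlive the unit of work they wrap.
                // Scoping them per request means each one is constructed with the
                // same MyDataContext instance that the service will later Save().
                For<IRepositoryA>()
                    .HttpContextScoped()
                    .Use<LINQtoSQLRepositoryA>();

                For<IRepositoryB>()
                    .HttpContextScoped()
                    .Use<LINQtoSQLRepositoryB>();
            }
        }

    With HTTP-scoped objects in play, the scoped instances should also be released at the end of each request (StructureMap's ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects, typically called from Application_EndRequest).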

    Read the article

  • Instance caching in Objective C

    - by zoul
    Hello! I want to cache the instances of a certain class. The class keeps a dictionary of all its instances and when somebody requests a new instance, the class tries to satisfy the request from the cache first. There is a small problem with memory management though: The dictionary cache retains the inserted objects, so that they never get deallocated. I do want them to get deallocated, so that I had to overload the release method and when the retain count drops to one, I can remove the instance from cache and let it get deallocated. This works, but I am not comfortable mucking around the release method and find the solution overly complicated. I thought I could use some hashing class that does not retain the objects it stores. Is there such? The idea is that when the last user of a certain instance releases it, the instance would automatically disappear from the cache. NSHashTable seems to be what I am looking for, but the documentation talks about “supporting weak relationships in a garbage-collected environment.” Does it also work without garbage collection? Clarification: I cannot afford to keep the instances in memory unless somebody really needs them, that is why I want to purge the instance from the cache when the last “real” user releases it. Better solution: This was on the iPhone, I wanted to cache some textures and on the other hand I wanted to free them from memory as soon as the last real holder released them. The easier way to code this is through another class (let’s call it TextureManager). This class manages the texture instances and caches them, so that subsequent calls for texture with the same name are served from the cache. There is no need to purge the cache immediately as the last user releases the texture. We can simply keep the texture cached in memory and when the device gets short on memory, we receive the low memory warning and can purge the cache. This is a better solution, because the caching stuff does not pollute the Texture class, we do not have to mess with release and there is even a higher chance for cache hits. The TextureManager can be abstracted into a ResourceManager, so that it can cache other data, not only textures.
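
    The underlying idea - a cache that hands out shared instances without keeping them alive - can be illustrated outside Objective-C too. For consistency with the other sketches in this listing, here is a rough C# equivalent built on WeakReference; it is purely illustrative (the names are invented) rather than an answer to the NSHashTable question itself:

        using System;
        using System.Collections.Generic;

        // A cache that does not keep its entries alive: if no one else holds a
        // value, the GC may collect it and the next lookup simply reloads it.
        public class WeakCache<TKey, TValue> where TValue : class
        {
            private readonly Dictionary<TKey, WeakReference> _entries = new Dictionary<TKey, WeakReference>();
            private readonly Func<TKey, TValue> _loader;

            public WeakCache(Func<TKey, TValue> loader)
            {
                _loader = loader;
            }

            public TValue Get(TKey key)
            {
                WeakReference weak;
                if (_entries.TryGetValue(key, out weak))
                {
                    var cached = weak.Target as TValue;
                    if (cached != null)
                        return cached;          // still alive -> cache hit
                }

                var fresh = _loader(key);       // reload, e.g. read the texture from disk
                _entries[key] = new WeakReference(fresh);
                return fresh;
            }
        }

    As the TextureManager paragraph above concludes, though, a strongly-referencing cache that is simply purged on a low-memory warning is usually the simpler design.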

    Read the article

  • web service filling gridview awfully slow, as is paging/sorting

    - by nat
    Hi, I am making a page which calls a web service to fill a gridview. This returns a lot of data and is horribly slow. I ran svcutil.exe on the WSDL page and it generated the class and config for me, so I have a load of strongly typed objects coming back from each request to the many service functions. I am then using LINQ to loop around the objects grabbing the necessary information as I go, but for each row in the grid I need to loop around an object, and grab another list of objects (from the same request) and loop around each of them - a one-to-many parent/child relationship. All of this then gets dropped into a custom DataTable a row at a time. Hope that makes sense. I'm not sure there is any way to speed up the initial load, but surely I should be able to page/sort a lot faster than it currently does; at the moment, it appears to take as long to page/sort as it does to load initially. I thought that if, when the page first loaded, I put the data source of the grid in the session, I could whip it out of the session to deal with paging/sorting and the like. Basically it is doing the below: protected void Page_Load(object sender, EventArgs e) { //init the datatable //grab the filter vars (if there are any) WebServiceObj WS = WSClient.Method(args); //fill the datatable (around and around we go) foreach (ParentObject po in WS.ReturnedObj) { var COs = from ChildObject c in WS.AnotherReturnedObj where c.whatever.equals(...) ...etc foreach(ChildObject c in COs){ myDataTable.Rows.Add(tlo.this, tlo.that, c.thisthing, c.thatthing, etc......); } } grdListing.DataSource = myDataTable; Session["dt"] = myDataTable; grdListing.DataBind(); } protected void Listing_PageIndexChanging(object sender, GridViewPageEventArgs e) { grdListing.PageIndex = e.NewPageIndex; grdListing.DataSource = Session["dt"] as DataTable; grdListing.DataBind(); } protected void Listing_Sorting(object sender, GridViewSortEventArgs e) { DataTable dt = Session["dt"] as DataTable; DataView dv = new DataView(dt); string sortDirection = " ASC"; if (e.SortDirection == SortDirection.Descending) sortDirection = " DESC"; dv.Sort = e.SortExpression + sortDirection; grdListing.DataSource = dv.ToTable(); grdListing.DataBind(); } Am I doing this totally wrongly? Or is the slowness just coming from the amount of data being bound in/returned from the web service? There are maybe 15 columns (ish) and a whole load of rows, with more being added to the data the web service queries all the time. Any suggestions/tips happily received. Thanks

    Read the article

  • Specializing a template function that takes a universal reference parameter

    - by David Stone
    How do I specialize a template function that takes a universal reference parameter? foo.hpp: template<typename T> void foo(T && t) // universal reference parameter foo.cpp template<> void foo<Class>(Class && class) { // do something complicated } Here, Class is no longer a deduced type and thus is Class exactly; it cannot possibly be Class &, so reference collapsing rules will not help me here. I could perhaps create another specialization that takes a Class & parameter (I'm not sure), but that implies duplicating all of the code contained within foo for every possible combination of rvalue / lvalue references for all parameters, which is what universal references are supposed to avoid. Is there some way to accomplish this? To be more specific about my problem in case there is a better way to solve it: I have a program that can connect to multiple game servers, and each server, for the most part, calls everything by the same name. However, they have slightly different versions for a few things. There are a few different categories that these things can be: a move, an item, etc. I have written a generic sort of "move string to move enum" set of functions for internal code to call, and my server interface code has similar functions. However, some servers have their own internal ID that they communicate with, some use strings, and some use both in different situations. Now what I want to do is make this a little more generic. I want to be able to call something like ServerNamespace::server_cast<Destination>(source). This would allow me to cast from a Move to a std::string or ServerMoveID. Internally, I may need to make a copy (or move from) because some servers require that I keep a history of messages sent. Universal references seem to be the obvious solution to this problem. The header file I'm thinking of right now would expose simply this: namespace ServerNamespace { template<typename Destination, typename Source> Destination server_cast(Source && source); } And the implementation file would define all legal conversions as template specializations.

    Read the article

  • How to use Mozilla ActiveX Control without registry

    - by Andrew McKinlay
    I've been using the IE Browser component that is part of Windows. But I'm running into problems with security settings. For example, users get security warnings on pages with Javascript. So I'm looking at using the Mozilla ActiveX control instead. It's especially nice because it has a compatible interface. It works well if I let it install the control in the registry. But my users don't always have administrator rights to install things in the registry. So I'm trying to figure out how to use the control without registry changes. I'm using DllGetClassObject to get the class factory (IID_ICLASSFACTORY) and then CoRegisterClassObject to register it. All the API calls appear to succeed. And when I create an AtlAxWin window with the CLSID, it also appears to work. But when I try to call Navigate on the AtlAxGetControl it doesn't work - the interface doesn't have Navigate. I would show the code but it's in an obscure language (Suneido) so it wouldn't mean much. An example in C or C++ would be easy for me to translate. Or an example in another dynamic language like Python or Ruby might be helpful. Obviously I'm doing something wrong. Maybe I'm passing the wrong thing to CoRegisterClassObject? The MSDN documentation isn't very clear on what to pass and I haven't found any good examples. Or if there is another approach, I'm ok with that too. Note: I'm using the AtlAxWin window class so I'm not directly creating the control and can't use this approach. Another option is registry free com with a manifest. But again, I couldn't find a good example, especially since I'm not using Visual Studio. I tried to use the MT manifest tool, but couldn't figure it out. I don't think I can use DLL redirection since that doesn't get around the registry issue AFAIK. Another possibility is using WebKit but it seems even harder to use.

    Read the article

  • log4bash: Cannot find a way to add MaxBackupIndex to this logger implementation

    - by Syffys
    I have been trying to modify this log4bash implementation but I cannot manage to make it work. Here's a sample: #!/bin/bash TRUE=1 FALSE=0 ############### Added for testing log4bash_LOG_ENABLED=$TRUE log4bash_rootLogger=$TRACE,f,s log4bash_appender_f=file log4bash_appender_f_dir=$(pwd) log4bash_appender_f_file=test.log log4bash_appender_f_roll_format=%Y%m log4bash_appender_f_roll=$TRUE log4bash_appender_f_maxBackupIndex=10 #################################### log4bash_abs(){ if [ "${1:0:1}" == "." ]; then builtin echo ${rootDir}/${1} else builtin echo ${1} fi } log4bash_check_app_dir(){ if [ "$log4bash_LOG_ENABLED" -eq $TRUE ]; then dir=$(log4bash_abs $1) if [ ! -d ${dir} ]; then #log a seperation line mkdir $dir fi fi } # Delete old log files # $1 Log directory # $2 Log filename # $3 Log filename suffix # $4 Max backup index log4bash_delete_old_files(){ ##### Added for testing builtin echo "Running log4bash_delete_old_files $@" &2 ##### if [ "$log4bash_LOG_ENABLED" -eq $TRUE ] && [ -n "$3" ] && [ "$4" -gt 0 ]; then local directory=$(log4bash_abs $1) local filename=$2 local maxBackupIndex=$4 local suffix=$(echo "${3}" | sed -re 's/[^.]/?/g') local logFileList=$(find "${directory}" -mindepth 1 -maxdepth 1 -name "${filename}${suffix}" -type f | xargs ls -1rt) local fileCnt=$(builtin echo -e "${logFileList}" | wc -l) local fileToDeleteCnt=$(($fileCnt-$maxBackupIndex)) local fileToDelete=($(builtin echo -e "${logFileList}" | head -n "${fileToDeleteCnt}" | sed ':a;N;$!ba;s/\n/ /g')) ##### Added for testing builtin echo "log4bash_delete_old_files About to start deletion ${fileToDelete[@]}" &2 ##### if [ ${fileToDeleteCnt} -gt 0 ]; then for f in "${fileToDelete[@]}"; do #### Added for testing builtin echo "Removing file ${f}" &2 #### builtin eval rm -f ${f} done fi fi } #Appender # $1 Log directory # $2 Log file # $3 Log file roll ? 
# $4 Appender Name log4bash_filename(){ builtin echo "Running log4bash_filename $@" &2 local format local filename log4bash_check_app_dir "${1}" if [ ${3} -eq 1 ];then local formatProp=${4}_roll_format format=${!formatProp} if [ -z ${format} ]; then format=$log4bash_appender_file_format fi local suffix=.`date "+${format}"` filename=${1}/${2}${suffix} # Old log files deletion local previousFilenameVar=int_${4}_file_previous local maxBackupIndexVar=${4}_maxBackupIndex if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then builtin eval export $previousFilenameVar=$filename log4bash_delete_old_files "${1}" "${2}" "${suffix}" "${!maxBackupIndexVar}" else builtin echo "log4bash_filename $previousFilenameVar = ${!previousFilenameVar}" fi else filename=${1}/${2} fi builtin echo $filename } ######################## Added for testing filename_caller(){ builtin echo "filename_caller Call $1" output=$(log4bash_abs $(log4bash_filename "${log4bash_appender_f_dir}" "${log4bash_appender_f_file}" "1" "log4bash_appender_f" )) builtin echo ${output} } #### Previous logs generation for i in {1101..1120}; do file="${log4bash_appender_f_file}.2012${i:2:3}" builtin echo "${file} $i" touch -m -t "2012${i}0000" ${log4bash_appender_f_dir}/$file done for i in {1..4}; do filename_caller $i done I expect log4bash_filename function to step into the following if only when the calculated log filename is different from the previous one: if [ -n "${!maxBackupIndexVar}" ] && [ "${!previousFilenameVar}" != "${filename}" ]; then For this scenario to apply, I'd need ${!previousFilenameVar} to be correctly set, but it's not the case, so log4bash_filename steps into this if all the time which is really not necessary... It looks like the issue is due to the following line not working properly: builtin eval export $previousFilenameVar=$filename I have a some theories to explain why: in the original code, functions are declared and exported as readonly which makes them unable to modify global variable. I removed readonly declarations in the above sample, but probleme persists. Function calls are performed in $() which should make them run into seperated shell instances so variable modified are not exported to the main shell But I cannot manage to find a workaround to this issue... Any help is appreciated, thanks in advance!

    Read the article

  • compressed archive with quick access to individual file

    - by eric.frederich
    I need to come up with a file format for a new application I am writing. This file will need to hold a bunch of other files which are mostly text but can be other formats as well. Naturally, a compressed tar file seems to fit the bill. The problem is that I want to be able to retrieve some data from the file very quickly, and getting just a particular file out of a tar.gz file seems to take longer than it should. I am assuming that this is because it has to decompress the entire archive even though I just want one file. When I have just a regular uncompressed tar file I can get that data really quickly. Let's say the file I need quickly is called data.dat. For example, the command... tar -x data.dat -zf myfile.tar.gz ...is what takes a lot longer than I'd like. MP3 files have ID3 data and JPEG files have EXIF data that can be read in quickly without opening the entire file. I would like my data.dat file to be available in a similar way. I was thinking that I could leave it uncompressed and separate from the rest of the files in myfile.tar.gz. I could then create a tar file of data.dat and myfile.tar.gz, and hopefully that data could be retrieved faster because it is at the head of the outer tar file and is uncompressed. Does this sound right - putting a compressed tar inside of a tar file? Basically, my need is to have an archive type of file with quick access to one particular file. Tar does this just fine, but I'd also like to have the data compressed, and as soon as I do that, I no longer have quick access. Are there other archive formats that will give me the quick access I need? As a side note, this application will be written in Python. If the solution calls for reinventing the wheel with my own binary format, I am familiar with C and would have no problem writing the Python module in C. Ideally I'd just use tar, dd, cat, gzip, etc. though. Thanks, ~Eric
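
    One format worth considering is zip rather than tar.gz: zip compresses each member individually and keeps a central directory, so a single member can be located and extracted without decompressing the rest of the archive (Python's zipfile module exposes this directly). To keep the code samples in this listing in a single language, here is an illustrative C# sketch of the same idea; the archive and entry names are made up:

        using System.IO;
        using System.IO.Compression;   // .NET 4.5+, reference System.IO.Compression.FileSystem

        class QuickLookup
        {
            // Read just one member ("data.dat") out of the archive. Because zip stores
            // a central directory and compresses entries independently, nothing else
            // in the archive has to be decompressed.
            static string ReadDataFile(string archivePath)
            {
                using (ZipArchive archive = ZipFile.OpenRead(archivePath))
                {
                    ZipArchiveEntry entry = archive.GetEntry("data.dat");
                    if (entry == null)
                        return null;

                    using (var reader = new StreamReader(entry.Open()))
                        return reader.ReadToEnd();
                }
            }
        }

    In Python the equivalent would be a few lines with zipfile.ZipFile(...).read("data.dat"), with the same property that only the requested member is decompressed.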

    Read the article

  • How would you implement this "WorkerChain" functionality in .NET?

    - by Dan Tao
    Sorry for the vague question title -- not sure how to encapsulate what I'm asking below succinctly. (If someone with editing privileges can think of a more descriptive title, feel free to change it.) The behavior I need is this. I am envisioning a worker class that accepts a single delegate task in its constructor (for simplicity, I would make it immutable -- no more tasks can be added after instantiation). I'll call this task T. The class should have a simple method, something like GetToWork, that will exhibit this behavior: If the worker is not currently running T, then it will start doing so right now. If the worker is currently running T, then once it is finished, it will start T again immediately. GetToWork can be called any number of times while the worker is running T; the simple rule is that, during any execution of T, if GetToWork was called at least once, T will run again upon completion (and then if GetToWork is called while T is running that time, it will repeat itself again, etc.). Now, this is pretty straightforward with a boolean switch. But this class needs to be thread-safe, by which I mean, steps 1 and 2 above need to comprise atomic operations (at least I think they do). There is an added layer of complexity. I have need of a "worker chain" class that will consist of many of these workers linked together. As soon as the first worker completes, it essentially calls GetToWork on the worker after it; meanwhile, if its own GetToWork has been called, it restarts itself as well. Logically calling GetToWork on the chain is essentially the same as calling GetToWork on the first worker in the chain (I would fully intend that the chain's workers not be publicly accessible). One way to imagine how this hypothetical "worker chain" would behave is by comparing it to a team in a relay race. Suppose there are four runners, W1 through W4, and let the chain be called C. If I call C.StartWork(), what should happen is this: If W1 is at his starting point (i.e., doing nothing), he will start running towards W2. If W1 is already running towards W2 (i.e., executing his task), then once he reaches W2, he will signal to W2 to get started, immediately return to his starting point and, since StartWork has been called, start running towards W2 again. When W1 reaches W2's starting point, he'll immediately return to his own starting point. If W2 is just sitting around, he'll start running immediately towards W3. If W2 is already off running towards W3, then W2 will simply go again once he's reached W3 and returned to his starting point. The above is probably a little convoluted and written out poorly. But hopefully you get the basic idea. Obviously, these workers will be running on their own threads. Also, I guess it's possible this functionality already exists somewhere? If that's the case, definitely let me know!
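
    For what it's worth, a hedged sketch of the "run again if poked while running" rule: a pending flag and a running flag guarded by a lock make steps 1 and 2 atomic, and chaining is then just each worker calling GetToWork on the next one when T completes. The names are invented and error handling is omitted:

        using System;
        using System.Threading.Tasks;

        public class Worker
        {
            private readonly Action _task;          // T
            private readonly object _gate = new object();
            private bool _running;
            private bool _pending;

            public Worker Next { get; set; }        // optional next worker in the chain

            public Worker(Action task) { _task = task; }

            public void GetToWork()
            {
                lock (_gate)
                {
                    if (_running) { _pending = true; return; }  // rule 2: queue one re-run
                    _running = true;
                }
                Task.Factory.StartNew(RunLoop);                 // rule 1: start right now
            }

            private void RunLoop()
            {
                while (true)
                {
                    _task();

                    if (Next != null)
                        Next.GetToWork();           // hand the baton to the next runner

                    lock (_gate)
                    {
                        if (!_pending) { _running = false; return; }
                        _pending = false;           // GetToWork arrived while running -> go again
                    }
                }
            }
        }

    A WorkerChain would then construct its workers, wire Next from first to last, and expose GetToWork on the first worker only.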

    Read the article

  • How to implement a caching model without violating MVC pattern?

    - by RPM1984
    Hi Guys, I have an ASP.NET MVC 3 (Razor) Web Application, with a particular page which is highly database intensive, and user experience is of the upmost priority. Thus, i am introducing caching on this particular page. I'm trying to figure out a way to implement this caching pattern whilst keeping my controller thin, like it currently is without caching: public PartialViewResult GetLocationStuff(SearchPreferences searchPreferences) { var results = _locationService.FindStuffByCriteria(searchPreferences); return PartialView("SearchResults", results); } As you can see, the controller is very thin, as it should be. It doesn't care about how/where it is getting it's info from - that is the job of the service. A couple of notes on the flow of control: Controllers get DI'ed a particular Service, depending on it's area. In this example, this controller get's a LocationService Services call through to an IQueryable<T> Repository and materialize results into T or ICollection<T>. How i want to implement caching: I can't use Output Caching - for a few reasons. First of all, this action method is invoked from the client-side (jQuery/AJAX), via [HttpPost], which according to HTTP standards should not be cached as a request. Secondly, i don't want to cache purely based on the HTTP request arguments - the cache logic is a lot more complicated than that - there is actually two-level caching going on. As i hint to above, i need to use regular data-caching, e.g Cache["somekey"] = someObj;. I don't want to implement a generic caching mechanism where all calls via the service go through the cache first - i only want caching on this particular action method. First thought's would tell me to create another service (which inherits LocationService), and provide the caching workflow there (check cache first, if not there call db, add to cache, return result). That has two problems: The services are basic Class Libraries - no references to anything extra. I would need to add a reference to System.Web here. I would have to access the HTTP Context outside of the web application, which is considered bad practice, not only for testability, but in general - right? I also thought about using the Models folder in the Web Application (which i currently use only for ViewModels), but having a cache service in a models folder just doesn't sound right. So - any ideas? Is there a MVC-specific thing (like Action Filter's, for example) i can use here? General advice/tips would be greatly appreciated.
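
    One arrangement that keeps the controller thin and keeps System.Web out of the service class library is a caching decorator: it lives in the web project (or a web-aware infrastructure project), implements the same service interface, wraps the real service, and is what the DI container hands to the controller. A sketch, assuming an ILocationService interface sits behind _locationService and treating SearchResults and GetCacheKey() as stand-in names:

        using System;
        using System.Web;
        using System.Web.Caching;

        // Lives in the web project, so the core service library never references System.Web.
        public class CachedLocationService : ILocationService
        {
            private readonly ILocationService _inner;

            public CachedLocationService(ILocationService inner)
            {
                _inner = inner;
            }

            public SearchResults FindStuffByCriteria(SearchPreferences prefs)
            {
                // GetCacheKey() stands in for however the preferences are flattened into a key.
                string key = "location-stuff:" + prefs.GetCacheKey();

                var cached = HttpRuntime.Cache[key] as SearchResults;
                if (cached != null)
                    return cached;

                var results = _inner.FindStuffByCriteria(prefs);
                HttpRuntime.Cache.Insert(key, results, null,
                    DateTime.UtcNow.AddMinutes(5), Cache.NoSlidingExpiration);
                return results;
            }
        }

    Because the decorator is swapped in at the composition root, the controller and the core service stay untouched, and the two-level cache logic gets a single, testable home.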

    Read the article

  • Linux, GNU GCC, ld, version scripts and the ELF binary format -- How does it work??

    - by themoondothshine
    Hey all, I'm trying to learn more about library versioning in Linux and how to put it all to work. Here's the context: -- I have two versions of a dynamic library which expose the same set of interfaces, say libsome1.so and libsome2.so. -- An application is linked against libsome1.so. -- This application uses libdl.so to dynamically load another module, say libmagic.so. -- Now libmagic.so is linked against libsome2.so. Obviously, without using linker scripts to hide symbols in libmagic.so, at run-time all calls to interfaces in libsome2.so are resolved to libsome1.so. This can be confirmed by checking the value returned by libVersion() against the value of the macro LIB_VERSION. -- So I try next to compile and link libmagic.so with a linker script which hides all symbols except 3 which are defined in libmagic.so and are exported by it. This works... Or at least libVersion() and LIB_VERSION values match (and it reports version 2 not 1). -- However, when some data structures are serialized to disk, I noticed some corruption. In the application's directory if I delete libsome1.so and create a soft link in its place to point to libsome2.so, everything works as expected and the same corruption does not happen. I can't help but think that this may be caused due to some conflict in the run-time linker's resolution of symbols. I've tried many things, like trying to link libsome2.so so that all symbols are alised to symbol@@VER_2 (which I am still confused about because the command nm -CD libsome2.so still lists symbols as symbol and not symbol@@VER_2)... Nothing seems to work!!! Help!!!!!! Edit: I should have mentioned it earlier, but the app in question is Firefox, and libsome1.so is libsqlite3.so shipped with it. I don't quite have the option of recompiling them. Also, using version scripts to hide symbols seems to be the only solution right now. So what really happens when symbols are hidden? Do they become 'local' to the SO? Does rtld have no knowledge of their existence? What happens when an exported function refers to a hidden symbol?

    Read the article

  • ASP.NET MVC + LiveValidation - how come the flagged text is all over the place?

    - by melaos
    Hi guys, this is an ASP.NET MVC project, and this is the form markup: <% using (Html.BeginForm("ProductAdded", "Home")) { %> Register Your Product <%= ViewData["MainHeader"]%> <p><%=ViewData["IntroText"]%></p> <div style="display: none;"> <div id="regionThreePane"> <table border="0" cellpadding="0" cellspacing="0" frame="void" style="width: 100%"> <tr> <td width='250px'><select name="ProdLBox1" id="ProdLBox1" class="ProdLBox1" size="8"></select></td> <td width='250px'><select name="ProdLBox2" id="ProdLBox2" class="ProdLBox2" size="8"></select></td> <td width='250px'><select name="ProdLBox3" id="ProdLBox3" class="ProdLBox3" size="8"></select></td> </tr> </table> </div> I'm using LiveValidation for my client-side validation: var v_fname = new LiveValidation('Customer_FirstName', { validMessage: " " }, { onlyOnSubmit: true }); v_fname.add(Validate.Presence, { failureMessage: enterfirstname}); var v_lname = new LiveValidation('Customer_LastName', { validMessage: " " }); v_lname.add(Validate.Presence, { failureMessage: enterlastname }); var v_email = new LiveValidation('Customer_Email', { validMessage: " " }); v_email.add(Validate.Presence, { failureMessage: enteremail, validMessage: " " }); v_email.add(Validate.Email, { failureMessage: entervalidemail}); What I notice is that after a button call like this: $(".btnAddProduct").click(function() { //Check first to see if there's anything to be added if (parseFloat($(".tboAddProduct").val()) < 1) { //TO DO: to replace with localized text var selectProductError = "Please select a product first"; $("#validationSummary").text(selectProductError); //alert("Please select a product first"); return false; } $(".PanelProductReg").show(); addProductRow($(".tboAddProductId").val(), $("#tboAddProduct").val()); }); the validation messages start to appear all over the page for every input that is tagged for LiveValidation, instead of just appearing when the controls are highlighted and on blur. I'm using some AJAX calls to get data and a lot of jQuery to dynamically do the GUI stuff. Could any of this be causing some sort of internal conflict? Thanks

    Read the article

  • Template function as a template argument

    - by Kos
    I've just got confused how to implement something in a generic way in C++. It's a bit convoluted, so let me explain step by step. Consider such code: void a(int) { // do something } void b(int) { // something else } void function1() { a(123); a(456); } void function2() { b(123); b(456); } void test() { function1(); function2(); } It's easily noticable that function1 and function2 do the same, with the only different part being the internal function. Therefore, I want to make function generic to avoid code redundancy. I can do it using function pointers or templates. Let me choose the latter for now. My thinking is that it's better since the compiler will surely be able to inline the functions - am I correct? Can compilers still inline the calls if they are made via function pointers? This is a side-question. OK, back to the original point... A solution with templates: void a(int) { // do something } void b(int) { // something else } template<void (*param)(int) > void function() { param(123); param(456); } void test() { function<a>(); function<b>(); } All OK. But I'm running into a problem: Can I still do that if a and b are generics themselves? template<typename T> void a(T t) { // do something } template<typename T> void b(T t) { // something else } template< ...param... > // ??? void function() { param<SomeType>(someobj); param<AnotherType>(someotherobj); } void test() { function<a>(); function<b>(); } I know that a template parameter can be one of: a type, a template type, a value of a type. None of those seems to cover my situation. My main question is hence: How do I solve that, i.e. define function() in the last example? (Yes, function pointers seem to be a workaround in this exact case - provided they can also be inlined - but I'm looking for a general solution for this class of problems).

    Read the article

  • 1180: Call to a possibly undefined method addEventListener

    - by Chris
    I'm going through some AS3 training, but I'm getting a weird error... I'm trying to add an event listener to the end of a motion tween in AS. I've created a tween, highlighted the frames, right clicked and copied the tween as AS and pasted it into the movie clip (I think there's a better way to do this, but I'm not sure what it is...) When I try to add the listener to the end of that code, I get the error. Here's my code. import fl.motion.AnimatorFactory; import fl.motion.MotionBase; import fl.motion.Motion; import flash.filters.*; import flash.geom.Point; import fl.motion.MotionEvent; import fl.events.*; var __motion_Enemy_3:MotionBase; if(__motion_Enemy_3 == null) { __motion_Enemy_3 = new Motion(); __motion_Enemy_3.duration = 30; // Call overrideTargetTransform to prevent the scale, skew, // or rotation values from being made relative to the target // object's original transform. // __motion_Enemy_3.overrideTargetTransform(); // The following calls to addPropertyArray assign data values // for each tweened property. There is one value in the Array // for every frame in the tween, or fewer if the last value // remains the same for the rest of the frames. __motion_Enemy_3.addPropertyArray("x", [0]); __motion_Enemy_3.addPropertyArray("y", [0]); __motion_Enemy_3.addPropertyArray("scaleX", [1.000000,1.048712,1.097424,1.146136,1.194847,1.243559,1.292271,1.340983,1.389695,1.438407,1.487118,1.535830,1.584542,1.633254,1.681966,1.730678,1.779389,1.828101,1.876813,1.925525,1.974237,2.022949,2.071661,2.120372,2.169084,2.217796,2.266508,2.315220,2.363932,2.412643]); __motion_Enemy_3.addPropertyArray("scaleY", [1.000000,1.048712,1.097424,1.146136,1.194847,1.243559,1.292271,1.340983,1.389695,1.438407,1.487118,1.535830,1.584542,1.633254,1.681966,1.730678,1.779389,1.828101,1.876813,1.925525,1.974237,2.022949,2.071661,2.120372,2.169084,2.217796,2.266508,2.315220,2.363932,2.412643]); __motion_Enemy_3.addPropertyArray("skewX", [0]); __motion_Enemy_3.addPropertyArray("skewY", [0]); __motion_Enemy_3.addPropertyArray("rotationConcat", [0]); __motion_Enemy_3.addPropertyArray("blendMode", ["normal"]); __motion_Enemy_3.addPropertyArray("cacheAsBitmap", [false]); __motion_Enemy_3.addEventListener(MotionEvent.MOTION_END, hurtPlayer); // Create an AnimatorFactory instance, which will manage // targets for its corresponding Motion. var __animFactory_Enemy_3:AnimatorFactory = new AnimatorFactory(__motion_Enemy_3); __animFactory_Enemy_3.transformationPoint = new Point(0.499558, 0.500000); // Call the addTarget function on the AnimatorFactory // instance to target a DisplayObject with this Motion. // The second parameter is the number of times the animation // will play - the default value of 0 means it will loop. // __animFactory_Enemy_3.addTarget(<instance name goes here>, 0); } function hurtPlayer(event:MotionEvent):void { this.parent.removeChild(this); } I've tried a few places for it, both with the animFactory_Enemy_3 variable and the motion_Enemy_3 variable - getting the same error both times.

    Read the article

  • What to Expect in Rails 4

    - by mikhailov
    Rails 4 is nearly there, we should be ready before it released. Most developers are trying hard to keep their application on the edge. Must see resources: 1) @sikachu talk: What to Expect in Rails 4.0 - YouTube 2) Rails Guides release notes: http://edgeguides.rubyonrails.org/4_0_release_notes.html There is a mix of all major changes down here: ActionMailer changes excerpt: Asynchronously send messages via the Rails Raise an ActionView::MissingTemplate exception when no implicit template could be found ActionPack changes excerpt Added controller-level etag additions that will be part of the action etag computation Add automatic template digests to all CacheHelper#cache calls (originally spiked in the cache_digests plugin) Add Routing Concerns to declare common routes that can be reused inside others resources and routes Added ActionController::Live. Mix it in to your controller and you can stream data to the client live truncate now always returns an escaped HTML-safe string. The option :escape can be used as false to not escape the result Added ActionDispatch::SSL middleware that when included force all the requests to be under HTTPS protocol ActiveModel changes excerpt AM::Validation#validates ability to pass custom exception to :strict option Changed `AM::Serializers::JSON.include_root_in_json' default value to false. Now, AM Serializers and AR objects have the same default behaviour Added ActiveModel::Model, a mixin to make Ruby objects work with AP out of box Trim down Active Model API by removing valid? and errors.full_messages ActiveRecord changes excerpt Use native mysqldump command instead of structure_dump method when dumping the database structure to a sql file. Attribute predicate methods, such as article.title?, will now raise ActiveModel::MissingAttributeError if the attribute being queried for truthiness was not read from the database, instead of just returning false ActiveRecord::SessionStore has been extracted from Active Record as activerecord-session_store gem. Please read the README.md file on the gem for the usage Fix reset_counters when there are multiple belongs_to association with the same foreign key and one of them have a counter cache Raise ArgumentError if list of attributes to change is empty in update_all Add Relation#load. This method explicitly loads the records and then returns self Deprecated most of the 'dynamic finder' methods. All dynamic methods except for find_by_... and find_by_...! are deprecated Added ability to ActiveRecord::Relation#from to accept other ActiveRecord::Relation objects Remove IdentityMap ActiveSupport changes excerpt ERB::Util.html_escape now escapes single quotes ActiveSupport::Callbacks: deprecate monkey patch of object callbacks Replace deprecated memcache-client gem with dalli in ActiveSupport::Cache::MemCacheStore Object#try will now return nil instead of raise a NoMethodError if the receiving object does not implement the method, but you can still get the old behavior by using the new Object#try! Object#try can't call private methods Add ActiveSupport::Deprecations.behavior = :silence to completely ignore Rails runtime deprecations What are the most important changes for you?

    Read the article

  • asp.net mvc radio button state

    - by Josh Bush
    I'm trying out asp.net mvc for a new project, and I ran across something odd. When I use the MVC UI helpers for textboxes, the values get persisted between calls. But, when I use a series of radio buttons, the checked state doesn't get persisted. Here's an example from my view. <li> <%=Html.RadioButton("providerType","1")%><label>Hospital</label> <%=Html.RadioButton("providerType","2")%><label>Facility</label> <%=Html.RadioButton("providerType","3")%><label>Physician</label> </li> When the form gets posted back, I build up an object with "ProviderType" as one of it's properties. The value on the object is getting set, and then I RedirectToAction with the provider as a argument. All is well, and I end up at a URL like "http://localhost/Provider/List?ProviderType=1" with ProviderType showing. The value gets persisted to the URL, but the UI helper isn't picking up the checked state. I'm having this problem with listbox, dropdownlist, and radiobutton. Textboxes pick up the values just fine. Do you see something I'm doing wrong? I'm assuming that the helpers will do this for me, but maybe I'll just have to take care of this on my own. I'm just feeling my way through this, so your input is appreciated. Edit: I just found the override for the SelectList constructor that takes a selected value. That took care of my dropdown issue I mentioned above. Edit #2: I found something that works, but it pains me to do it this way. I feel like this should be inferred. <li> <%=Html.RadioButton("ProviderType","1",Request["ProviderType"]=="1")%><label>Hospital</label> <%=Html.RadioButton("ProviderType", "2", Request["ProviderType"] == "2")%><label>Facility</label> <%=Html.RadioButton("ProviderType", "3", Request["ProviderType"] == "3")%><label>Physician</label> </li> Hopefully someone will come up with another way.
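
    A slightly tidier variant of "Edit #2" above is to push the comparison into a view model so the view is not reading Request directly - the explicit third argument is still needed, because after the redirect (a GET) there is no posted model state for the helper to infer the checked value from. A sketch, with ProviderListViewModel as an invented name and the view assumed to be strongly typed to it:

        public class ProviderListViewModel
        {
            public int ProviderType { get; set; }
            // ... the search results would go here as well
        }

        // Controller action: carry the chosen type through the redirect into the model.
        public ActionResult List(int? providerType)
        {
            var model = new ProviderListViewModel { ProviderType = providerType ?? 0 };
            return View(model);
        }

    The view then sets the checked state from the model rather than from Request: <%=Html.RadioButton("providerType", "1", Model.ProviderType == 1)%><label>Hospital</label> and likewise for values 2 and 3.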

    Read the article

  • App losing db connection

    - by DaveKub
    I'm having a weird issue with an old Delphi app losing it's database connection. Actually, I think it's losing something else that then makes the connection either drop or be unusable. The app is written in Delphi 6 and uses the Direct Oracle Access component (v4.0.7.1) to connect to an Oracle 9i database. The app runs as a service and periodically queries the db using a TOracleQuery object (qryAlarmList). The method that is called to do this looks like this: procedure TdmMain.RefreshAlarmList; begin try qryAlarmList.Execute; except on E: Exception do begin FStatus := ssError; EventLog.LogError(-1, 'TdmMain.RefreshAlarmList', 'Message: ' + E.Message); end; end; end; It had been running fine for years, until a couple of Perl scripts were added to this machine. These scripts run every 15 minutes and look for datafiles to import into the db, and then they do a some calculations and a bunch of reads/writes to/from the db. For some reason, when they are processing large amounts of data, and then the Delphi app tries to query the db, the Delphi app throws an exception at the "qryAlarmList.Execute" line in the above code listing. The exception is always: Access violation at address 00000000. read of address 00000000 HOW can something that the Perl scripts are doing cause this?? There are other Perl scripts on this machine that load data using the same modules and method calls and we didn't have problems. To make it even weirder, there are two other apps that will also suddenly lose their ability to talk to the database at the same time as the Perl stuff is running. Neither of those apps run on this machine, but both are Delphi 6 apps that use the same DOA component to connect to the same database. We have other apps that connect to the same db, written in Java or C# and they don't seem to have any problems. I've tried adding code before the '.Execute' method is called to: check the session's connection (session.CheckConnection(true); always comes back as 'ccOK'). see whether I can access a field of the qryAlarmList object to see if maybe it's become null; can access it fine. check the state of the qryAlarmList; always says it's qsIdle. Does anyone have any suggestions of something to try? This is driving me nuts! Dave

    Read the article

  • UpdatePanel Full Postback

    - by Korivo
    Greetings, here is the scenario. I have an .aspx page with an UpdatePanel like this <asp:UpdatePanel id="uPanelMain" runat="server"> <ContentTemplate> <uc:Calendar id="ucCalendar" runat="server" Visible="true" /> <uc:Scoring id="ucScoring" runat="server" Visible="false" /> </ContentTemplate> The control ucCalendar is loaded first and it contains a grid like this <asp:DataGrid CssClass="grid" ID="gridGames" runat="server" AutoGenerateColumns="False" HeaderStyle-CssClass="gridHeader" ItemStyle-CssClass="gridScoringRow" GridLines="None" ItemStyle-BackColor="#EEEEEE" AlternatingItemStyle-BackColor="#F5F5F5" OnEditCommand="doScoreGame" OnDeleteCommand="doEditGame" OnCancelCommand="printLineup" OnItemDataBound="gridDataBound"> <Columns> <asp:TemplateColumn > <ItemTemplate> <asp:CheckBox ID="chkDelete" runat="server" /> </ItemTemplate> </asp:TemplateColumn> <asp:BoundColumn DataField="idGame" Visible="false" /> <asp:BoundColumn DataField="isClose" Visible="false" /> <asp:TemplateColumn HeaderText="Status"> <ItemTemplate> <asp:Image ID="imgStatus" runat="server" ImageUrl="~/img/icoX.png" alt="icoStatus" /> </ItemTemplate> </asp:TemplateColumn> <asp:TemplateColumn> <ItemTemplate> <asp:LinkButton ID="linkScore" runat="server" CommandName="Edit" Text="Score" /> </ItemTemplate> </asp:TemplateColumn> </Columns> </asp:DataGrid> So when I click the LinkButton, the code-behind of the user control calls a public method in the .aspx page, like this: From the user control: protected void doScoreGame(object sender, DataGridCommandEventArgs e) { ((GM)this.Page).showScoring(null, null); } From the .aspx page: public void showScoring(object sender, EventArgs e) { removeLastLoadedControl(); ucScoring.Visible = true; } So, here comes the problem: there are two postbacks taking place when I change the Visible attribute of the ucScoring control. The first postback is fine; it's handled by the UpdatePanel. The second postback is a full postback, and I really don't understand why it is happening. I'm really lost here, please help! Thanks, Mat
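
    One cheap experiment, offered as a hedge rather than a diagnosis: LinkButtons generated inside a grid in a user control sometimes post back synchronously because they were never registered with the ScriptManager as async postback controls. Registering the per-row button when the row is bound costs little to try; the sketch below reuses the gridGames/linkScore names from the calendar control and assumes a ScriptManager is present on the page:

        protected void gridDataBound(object sender, DataGridItemEventArgs e)
        {
            // Keep the existing binding logic; additionally register the row's
            // LinkButton so its postback is routed through the UpdatePanel.
            var linkScore = e.Item.FindControl("linkScore") as LinkButton;
            if (linkScore != null)
            {
                ScriptManager sm = ScriptManager.GetCurrent(Page);
                if (sm != null)
                {
                    sm.RegisterAsyncPostBackControl(linkScore);
                }
            }
        }

    If that changes nothing, the usual next suspects are the panel's UpdateMode/ChildrenAsTriggers settings and anything inside showScoring that forces a full refresh.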

    Read the article

  • How can I improve the recursion capabilities of my ECMAScript implementation?

    - by ChaosPandion
    After some recent tests I have found that my implementation cannot handle very much recursion, although after running a few tests in Firefox I found that this may be more common than I originally thought. I believe the basic problem is that my implementation requires 3 calls to make a function call. The first call is made to a method named Call that makes sure the call is being made to a callable object and gets the value of any arguments that are references. The second call is made to a method named Call which is defined in the ICallable interface. This method creates the new execution context and builds the lambda expression if it has not been created. The final call is made to the lambda that the function object encapsulates. Clearly making a function call is quite heavy, but I am sure that with a little bit of tweaking I can make recursion a viable tool when using this implementation. public static object Call(ExecutionContext context, object value, object[] args) { var func = Reference.GetValue(value) as ICallable; if (func == null) { throw new TypeException(); } if (args != null && args.Length > 0) { for (int i = 0; i < args.Length; i++) { args[i] = Reference.GetValue(args[i]); } } var reference = value as Reference; if (reference != null) { if (reference.IsProperty) { return func.Call(reference.Value, args); } else { return func.Call(((EnviromentRecord)reference.Value).ImplicitThisValue(), args); } } return func.Call(Undefined.Value, args); } public object Call(object thisObject, object[] arguments) { var lexicalEnviroment = Scope.NewDeclarativeEnviroment(); var variableEnviroment = Scope.NewDeclarativeEnviroment(); var thisBinding = thisObject ?? Engine.GlobalEnviroment.GlobalObject; var newContext = new ExecutionContext(Engine, lexicalEnviroment, variableEnviroment, thisBinding); Engine.EnterContext(newContext); var result = Function.Value(newContext, arguments); Engine.LeaveContext(); return result; }
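
    One technique that helps when every interpreted call burns several CLR stack frames is trampolining: instead of Call invoking the function body directly, it returns a small "next step" object and a driver loop runs the steps, so the interpreted program's recursion depth is no longer tied to the CLR's. This is only a sketch of the shape - Thunk and the delegate signatures are invented, not part of the engine above:

        using System;

        // A computation either finished with a value or still has work to do.
        public sealed class Thunk
        {
            public readonly object Result;
            public readonly Func<Thunk> Next;
            public bool IsDone { get { return Next == null; } }

            private Thunk(object result, Func<Thunk> next) { Result = result; Next = next; }

            public static Thunk Done(object result) { return new Thunk(result, null); }
            public static Thunk More(Func<Thunk> next) { return new Thunk(null, next); }
        }

        public static class Trampoline
        {
            // Drives the chain iteratively: the C# stack never grows with the
            // interpreted program's recursion depth.
            public static object Run(Thunk thunk)
            {
                while (!thunk.IsDone)
                    thunk = thunk.Next();
                return thunk.Result;
            }
        }

    A tail-call site would then return Thunk.More(() => callee.Step(args)) instead of calling the callee directly, and only Trampoline.Run ever touches the CLR stack, by a constant amount.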

    Read the article
