Search Results

Search found 12562 results on 503 pages for 'secure delete'.

Page 431 of 503

  • Visual Studio 2010 / ASP.NET MVC / Publish

    - by SevenCentral
    I just did a clean install on Windows 7 x64 Professional with the final release of Visual Studio 2010 Premium. In order to duplicate what I'm experiencing, do the following:
    1. Create a new ASP.NET MVC 2 Web Application
    2. Right click the project and select Properties
    3. On the Web tab, select "Use Local IIS Web Server"
    4. Click on Create Virtual Directory
    5. Save all
    6. Unload the project
    7. Edit the project file
    8. Change MvcBuildViews to true
    9. Save all
    10. Reload the project
    11. Right click the project and select Publish
    12. Choose the file system publish method
    13. Enter a target location
    14. Choose Delete all existing files
    15. Select Publish
    16. Right click the project
    17. Select Publish
    Each time I do the above I get the following error: "It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level..." The error originates from obj\debug\package\packagetmp\web.config, relative to the project directory. I can repeat this all day long with any MVC 2 project I've built. In order to fix this problem, I need to set MvcBuildViews to false in the project file. That's not really an option. This wasn't a problem in Visual Studio 2008 and it seems to be an issue with the way the Publish command stages files beneath the project directory. Can anyone else duplicate this error? Is this a bug or by design? Is there a fix, workaround, etc...? Thanks.
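    A workaround often suggested for this staging problem (a sketch only, not verified against this exact setup; the ExtraWebConfigs item name and the path pattern are assumptions) is to delete the staged package files under obj\ before AspNetCompiler runs, by editing the MvcBuildViews target in the project file:

        <Target Name="MvcBuildViews" AfterTargets="AfterBuild" Condition="'$(MvcBuildViews)'=='true'">
          <!-- Remove the packaging output staged under obj\ so its extra web.config
               is not picked up by the view compiler (paths are illustrative). -->
          <ItemGroup>
            <ExtraWebConfigs Include="$(BaseIntermediateOutputPath)**\Package\**\web.config" />
          </ItemGroup>
          <Delete Files="@(ExtraWebConfigs)" />
          <AspNetCompiler VirtualPath="temp" PhysicalPath="$(WebProjectOutputDir)" />
        </Target>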

    Read the article

  • Changing save directory

    - by Nathan
    Disregard my previous pre-edit post; I rethought what I needed to do. This is what I'm currently doing (see https://developer.mozilla.org/en/XPCOM_Interface_Reference/nsIScriptableIO):

        downloadFile: function(httpLoc) {
            try {
                //new obj_URI object
                var obj_URI = Cc["@mozilla.org/network/io-service;1"].getService(Ci.nsIIOService).newURI(httpLoc, null, null);
                //new file object
                //set file with path
                obj_TargetFile.initWithPath("C:\\javascript\\cache\\test.pdf");
                //if file doesn't exist, create
                if(!obj_TargetFile.exists()) {
                    obj_TargetFile.create(0x00, 0644);
                }
                //new persistence object
                var obj_Persist = Cc["@mozilla.org/embedding/browser/nsWebBrowserPersist;1"].createInstance(Ci.nsIWebBrowserPersist);
                // with persist flags if desired
                const nsIWBP = Ci.nsIWebBrowserPersist;
                const flags = nsIWBP.PERSIST_FLAGS_REPLACE_EXISTING_FILES;
                obj_Persist.persistFlags = flags | nsIWBP.PERSIST_FLAGS_FROM_CACHE;
                //save file to target
                obj_Persist.saveURI(obj_URI, null, null, null, null, obj_TargetFile);
                return true;
            } catch (e) {
                alert(e);
            }
        }, //end downloadFile

    As you can see, the directory is hardcoded in there. I want to save the pdf file open in the active tab to a relative temporary directory, or anywhere that's out of the way enough that the user won't stumble along and delete it. I'm going to try that using File I/O; I was under the impression that what I was looking for was in scriptable file I/O and thus disabled.
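    One way to avoid the hardcoded path (a sketch, assuming an extension context where Components is available; the subfolder and file names are made up) is to ask the XPCOM directory service for the system temporary folder and build the target file from there:

        // Get the OS temp directory via the directory service ("TmpD"),
        // then create a private subfolder for the cached PDFs.
        var obj_TargetFile = Components.classes["@mozilla.org/file/directory_service;1"]
                                       .getService(Components.interfaces.nsIProperties)
                                       .get("TmpD", Components.interfaces.nsIFile);
        obj_TargetFile.append("my-extension-cache");   // hypothetical folder name
        if (!obj_TargetFile.exists()) {
            obj_TargetFile.create(Components.interfaces.nsIFile.DIRECTORY_TYPE, 0755);
        }
        obj_TargetFile.append("test.pdf");

    The rest of the downloadFile function can then use obj_TargetFile unchanged.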

    Read the article

  • database schema eligible for delta synchronization

    - by WilliamLou
    It's a question for discussion only. Right now, I need to re-design a MySQL database table. Basically, this table contains all the contract records I synchronized from another database. A contract record can be modified or deleted, and users can add new contract records via a GUI interface. At this stage, the table structure is exactly the same as the contract info (columns: serial number, expiry date etc.). In that case, I can only synchronize the whole table (delete all old records, replace with new ones). If I want to delta-synchronize the table (only synchronize modified, new and deleted records), how should I change the database schema? Here is the method I came up with, but I need your suggestions because I think it's a common scenario in database applications. 1) Introduce a sequence number concept/column: for each sequence, mark the newly added, modified and deleted records with this sequence number. By recording the last synchronized sequence number, only pass those records with a higher sequence number. 2) Because deleted contracts can be added back, and the original table has primary key constraints, should I create another table for those deleted records, or add a flag column to indicate whether this contract has been deleted? I hope I explained my question clearly. Anyway, if you know any articles or have your own suggestions about this, please let me know. Thanks!
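    A minimal sketch of option 1 combined with a soft-delete flag (table and column names are assumptions; the real contract table will differ):

        -- Track every change with a monotonically increasing sequence number and
        -- keep deleted rows around as tombstones instead of removing them.
        ALTER TABLE contract
            ADD COLUMN sync_seq BIGINT NOT NULL DEFAULT 0,
            ADD COLUMN is_deleted TINYINT(1) NOT NULL DEFAULT 0;

        CREATE INDEX idx_contract_sync_seq ON contract (sync_seq);

        -- The consumer remembers the highest sync_seq it has applied and asks only
        -- for newer rows; deletions arrive as rows with is_deleted = 1.
        SELECT serial_number, expiry_date, is_deleted, sync_seq
        FROM contract
        WHERE sync_seq > 12345          -- last synchronized sequence number
        ORDER BY sync_seq;

    Every insert, update or soft delete would then stamp sync_seq from a shared counter (for example an AUTO_INCREMENT helper table or an application-side sequence).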

    Read the article

  • Why does Hibernate 2nd level cache only cache within a session?

    - by Synesso
    Using a named query in our application and with ehcache as the provider, it seems that the query results are tied to the session. Any attempt to access the value from the cache for a second time results in a LazyInitializationException We have set lazy=true for the following mapping because this object is also used by another part of the system which does not require the reference... and we want to keep it lean. <class name="domain.ReferenceAdPoint" table="ad_point" mutable="false" lazy="false"> <cache usage="read-only"/> <id name="code" type="long" column="ad_point_id"> <generator class="assigned" /> </id> <property name="name" column="ad_point_description" type="string"/> <set name="synonyms" table="ad_point_synonym" cascade="all-delete-orphan" lazy="true"> <cache usage="read-only"/> <key column="ad_point_id" /> <element type="string" column="synonym_description" /> </set> </class> <query name="find.adpoints.by.heading">from ReferenceAdPoint adpoint left outer join fetch adpoint.synonyms where adpoint.adPointField.headingCode = ?</query> Here's a snippet from our hibernate.cfg.xml <property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.SingletonEhCacheProvider</property> <property name="hibernate.cache.use_query_cache">true</property> It doesn't seem to make sense that the cache would be constrained to the session. Why are the cached queries not usable outside of the (relatively short-lived) sessions?
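    The behaviour is easier to explain than it first looks: the query cache stores only entity identifiers, the entities themselves come out of the second-level cache, and the lazy synonyms collection is recreated as an uninitialized proxy bound to whichever session loads it. Touching that proxy after the session is closed is what throws the LazyInitializationException. A sketch of one workaround, assuming the calling code still has the session open (the getSynonyms() accessor name is an assumption):

        List<ReferenceAdPoint> adPoints = (List<ReferenceAdPoint>) session
                .getNamedQuery("find.adpoints.by.heading")
                .setParameter(0, headingCode)
                .setCacheable(true)
                .list();
        for (ReferenceAdPoint adPoint : adPoints) {
            // Force the lazy collection to load while a session is still available,
            // so later (session-less) reads of the cached entity do not blow up.
            Hibernate.initialize(adPoint.getSynonyms());
        }

    Alternatively, mapping the synonyms set with lazy="false" for this use case avoids the proxy altogether, at the cost of always loading the collection.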

    Read the article

  • vector::erase with pointer member

    - by matt
    I am manipulating vectors of objects defined as follows:

        class Hyp {
        public:
            int x;
            int y;
            double wFactor;
            double hFactor;
            char shapeNum;
            double* visibleShape;
            int xmin, xmax, ymin, ymax;

            Hyp(int xx, int yy, double ww, double hh, char s):
                x(xx), y(yy), wFactor(ww), hFactor(hh), shapeNum(s)
            { visibleShape=0; shapeNum=-1; };

            //Copy constructor necessary for support of vector::push_back() with visibleShape
            Hyp(const Hyp &other) {
                x = other.x;
                y = other.y;
                wFactor = other.wFactor;
                hFactor = other.hFactor;
                shapeNum = other.shapeNum;
                xmin = other.xmin;
                xmax = other.xmax;
                ymin = other.ymin;
                ymax = other.ymax;
                int visShapeSize = (xmax-xmin+1)*(ymax-ymin+1);
                visibleShape = new double[visShapeSize];
                for (int ind=0; ind<visShapeSize; ind++) {
                    visibleShape[ind] = other.visibleShape[ind];
                }
            };

            ~Hyp(){ delete[] visibleShape; };
        };

    When I create a Hyp object, allocate/write memory to visibleShape and add the object to a vector with vector::push_back, everything works as expected: the data pointed to by visibleShape is copied using the copy constructor. But when I use vector::erase to remove a Hyp from the vector, the other elements are moved correctly EXCEPT the pointer members visibleShape, which are now pointing to wrong addresses! How to avoid this problem? Am I missing something?
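    The missing piece is most likely the copy assignment operator: vector::erase shifts the trailing elements down with operator=, and the compiler-generated one copies only the pointer, after which the destruction of the last (now duplicate) element deletes the shared buffer. A sketch of the rule-of-three fix, reusing the sizing logic already in the copy constructor:

        // Deep-copying assignment operator; without it, erase() leaves elements
        // whose visibleShape points at memory already freed by ~Hyp().
        Hyp& operator=(const Hyp &other) {
            if (this == &other) return *this;
            delete[] visibleShape;              // release the old buffer
            x = other.x;  y = other.y;
            wFactor = other.wFactor;  hFactor = other.hFactor;
            shapeNum = other.shapeNum;
            xmin = other.xmin;  xmax = other.xmax;
            ymin = other.ymin;  ymax = other.ymax;
            int visShapeSize = (xmax-xmin+1)*(ymax-ymin+1);
            visibleShape = new double[visShapeSize];
            for (int ind=0; ind<visShapeSize; ind++) {
                visibleShape[ind] = other.visibleShape[ind];
            }
            return *this;
        }

    Storing the pixels in a std::vector<double> instead of a raw double* would make the compiler-generated copy and assignment correct and remove the manual delete[] entirely.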

    Read the article

  • Should frontend and backend be handled by different controllers?

    - by DR
    In my previous learning projects I always used a single controller, but now I wonder if that is good practice or even always possible. In all RESTful Rails tutorials the controllers have a show, an edit and an index view. If an authorized user is logged on, the edit view becomes available and the index view shows additional data manipulation controls, like a delete button or a link to the edit view. Now I have a Rails application which falls exactly into this pattern, but the index view is not reusable: The normal user sees a flashy index page with lots of pictures, complex layout, no Javascript requirement, ... The admin user index has a completely different minimalistic design, jQuery table and lots of additional data, ... Now I'm not sure how to handle this case. I can think of the following: Single controller, single view: The view is split into two large blocks/partials using an if statement. Single controller, two views: index and index_admin. Two different controllers: BookController and BookAdminController. None of these solutions seems perfect, but for now I'm inclined to use the 3rd option. What's the preferred way to do this?
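    For what it's worth, option 3 maps cleanly onto a namespaced controller in Rails; a sketch for Rails 2.x routing (the controller and resource names are assumptions):

        # config/routes.rb
        ActionController::Routing::Routes.draw do |map|
          map.resources :books                  # public BooksController#index: the flashy visitor page
          map.namespace :admin do |admin|
            admin.resources :books              # Admin::BooksController#index: the minimal jQuery table
          end
        end

    The two index templates then live in app/views/books/ and app/views/admin/books/, so neither view has to branch on the current user's role.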

    Read the article

  • AS3 using PrintJob to print a MovieClip

    - by Chris Waugh
    Hello, I am currently trying to create a function which will allow me to pass in a movieclip and print it. Here is the simplified version of the function: function printMovieClip(clip:MovieClip) { var printJob:PrintJob = new PrintJob(); var numPages:int = 0; var printY:int = 0; var printHeight:Number; if ( printJob.start() ) { /* Resize movie clip to fit within page width */ if (clip.width > printJob.pageWidth) { clip.width = printJob.pageWidth; clip.scaleY = clip.scaleX; } numPages = Math.ceil(clip.height / printJob.pageHeight); /* Add pages to print job */ for (var i:int = 0; i < numPages; i++) { printJob.addPage(clip, new Rectangle(0, printY, printJob.pageWidth, printJob.pageHeight)); printY += printJob.pageHeight; } /* Send print job to printer */ printJob.send(); /* Delete job from memory */ printJob = null; } } printMovieClip( testMC ); Unfortunately this is not working as expected i.e. printing the full width of the Movieclip and doing page breaks on the length. Any help with this would be greatly appreciated. Many thanks, Chris

    Read the article

  • Sql Server Backup and move backup file: How to cope with file permissions?

    - by Stefan Steinegger
    With our product we have a simple backup tool for the sql server database. This tool should just make a full backup and restore to and from any folder. Of course, the user (usually an administrator) needs permission to write to the target folder. To avoid the problem of not being able to perform a backup to a network drive, I write the backup to a temp file in the Sql Server backup directory. Then I move it to the target folder. This requires permission to delete the temporary file from the sql servers backup folder. Restore is the same in the other direction. This seemed to work fine until someone tested it on vista, where the user does not have write access to the backup folder by default. So there are many solutions to solve this, but none of them seemed to be really nice. One solution would be to find another folder for the temporary file. Both the sql server user as well as the administrator performing the backup need read and write permissions. Is there such a directory? Any other ideas? Thanks a lot.

    Read the article

  • Jquery - binding click event to a variable

    - by Kayote
    All, I am really stuck/ confused at this point. I have an array with 6 items in it. Each item in the array is dynamically filled with elements using jquery '.html' method. However, I cannot seem to be able to attach/ bind an event to this dynamically created variable. As soon as the browser gets to the problem line (see the area labeled 'PROBLEM AREA'), I get a 'undefined' error, which is really confusing as all the previous code on the very same variable works just fine. var eCreditSystem = document.getElementById("creditSystem"); var i = 0; var eCreditT = new Array(6); // 6 members created which will be recycled function createCreditTransaction () // func called when a transaction occurs, at the mo, attached to onclick() { if (i < 6) { eCreditT[i] = undefined; // to delete the existing data in the index of array addElements (i); } else if (i > 5 || eCreditT[i] != undefined) { ... } } function addElements (arrayIndex) // func called from within the 'createCreditTransaction()' func { eCreditT[i] = $(document.createElement('div')).addClass("cCreditTransaction").appendTo(eCreditSystem); $(eCreditT[i]).attr ('id', ('trans' + i)); $(eCreditT[i]).html ('<div class="cCreditContainer"><span class="cCreditsNo">-50</span>&nbsp;<img class="cCurrency" src="" alt="" /></div><span class="cCloseMsg">Click box to close.</span><div class="dots"></div><div class="dots"></div><div class="dots"></div>'); creditTransactionSlideOut (eCreditT[i], 666); // calling slideOut animation console.log(eCreditT[i]); // this confirms that the variable is not undefined /* ***** THE PROBLEM AREA ***** */ $(eCreditT[i]).on ('click', function () // if user clicks on the transaction box { creditTransactionSlideBackIn (eCreditT[i], 150); // slide back in animation }); return i++; }
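    Two things are worth checking here (a sketch, not a drop-in fix): .on() only exists in jQuery 1.7+, so on an older jQuery the call itself is undefined; and even once it binds, the handler closes over the shared counter i, which has already moved on by the time a click arrives. Capturing a local reference sidesteps both the index drift and the repeated re-wrapping:

        function addElements(arrayIndex) // called from createCreditTransaction()
        {
            var $box = $(document.createElement('div'))
                .addClass("cCreditTransaction")
                .attr('id', 'trans' + arrayIndex)
                .appendTo(eCreditSystem);
            $box.html('...same markup as above...');
            eCreditT[arrayIndex] = $box;                 // keep the jQuery object, not the raw node
            creditTransactionSlideOut($box, 666);        // calling slideOut animation

            // $box is fixed for this handler, no matter what i is later.
            $box.on('click', function () {               // use .bind('click', ...) on jQuery < 1.7
                creditTransactionSlideBackIn($box, 150);
            });
            return arrayIndex + 1;
        }

    The caller would then do i = addElements(i); instead of relying on the function to bump the shared counter.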

    Read the article

  • Creating self-referential tables with polymorphism in SQLALchemy

    - by Jace
    I'm trying to create a db structure in which I have many types of content entities, of which one, a Comment, can be attached to any other. Consider the following: from datetime import datetime from sqlalchemy import create_engine from sqlalchemy import Column, ForeignKey from sqlalchemy import Unicode, Integer, DateTime from sqlalchemy.orm import relation, backref from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Entity(Base): __tablename__ = 'entities' id = Column(Integer, primary_key=True) created_at = Column(DateTime, default=datetime.utcnow, nullable=False) edited_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False) type = Column(Unicode(20), nullable=False) __mapper_args__ = {'polymorphic_on': type} # <...insert some models based on Entity...> class Comment(Entity): __tablename__ = 'comments' __mapper_args__ = {'polymorphic_identity': u'comment'} id = Column(None, ForeignKey('entities.id'), primary_key=True) _idref = relation(Entity, foreign_keys=id, primaryjoin=id == Entity.id) attached_to_id = Column(Integer, ForeignKey('entities.id'), nullable=False) #attached_to = relation(Entity, remote_side=[Entity.id]) attached_to = relation(Entity, foreign_keys=attached_to_id, primaryjoin=attached_to_id == Entity.id, backref=backref('comments', cascade="all, delete-orphan")) text = Column(Unicode(255), nullable=False) engine = create_engine('sqlite://', echo=True) Base.metadata.bind = engine Base.metadata.create_all(engine) This seems about right, except SQLAlchemy doesn't like having two foreign keys pointing to the same parent. It says ArgumentError: Can't determine join between 'entities' and 'comments'; tables have more than one foreign key constraint relationship between them. Please specify the 'onclause' of this join explicitly. How do I specify onclause?
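    The ambiguity is not in the attached_to relation (which already has an explicit primaryjoin) but in the joined-table inheritance itself: comments has two foreign keys to entities, so the mapper cannot tell which one is the inheritance join. Declaring inherit_condition resolves it; a sketch of the Comment class with that change (and without the _idref workaround):

        class Comment(Entity):
            __tablename__ = 'comments'

            id = Column(None, ForeignKey('entities.id'), primary_key=True)
            attached_to_id = Column(Integer, ForeignKey('entities.id'), nullable=False)

            __mapper_args__ = {
                'polymorphic_identity': u'comment',
                'inherit_condition': id == Entity.id,   # which FK joins comments to entities
            }

            attached_to = relation(
                Entity,
                foreign_keys=attached_to_id,
                primaryjoin=attached_to_id == Entity.id,
                backref=backref('comments', cascade='all, delete-orphan'),
            )
            text = Column(Unicode(255), nullable=False)

    With that in place, create_all() and the relation both know which constraint is which.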

    Read the article

  • Bidirectional replication update record problem

    - by Mirek
    Hi, I would like to present my problem related to SQL Server 2005 bidirectional replication. What do I need? My team leader wants to solve one of our problems using bidirectional replication between two databases, each used by a different application. One application creates records in table A; changes should replicate to the second database into a copy of table A. When data on the second server are changed, those changes have to be propagated back to the first server. I am trying to achieve bidirectional transactional replication between two databases on one server, which is running SQL Server 2005. I have managed to set this up using scripts, and established 2 publications and 2 read-only subscriptions with loopback detection. The distribution database is created and publishing is enabled on both databases. Distributor and publisher are up. We are using some rules to control which records will be replicated, so we need to call our custom stored procedures during replication. So, articles are set to use update, insert and delete custom stored procedures. So far so good, but... Everything works fine and changes are replicating, until updates are done on both tables simultaneously or before changes are replicated (and that takes about 3-6 seconds). Both records then end up with different values.

        UPDATE db1.dbo.TestTable SET Col = 4 WHERE ID = 1
        UPDATE db2.dbo.TestTable SET Col = 5 WHERE ID = 1

    results in:

        db1.dbo.TestTable COL = 5
        db2.dbo.TestTable COL = 4

    But we want to have last-change-wins replication. Please, is there a way to solve my problem? How can I ensure the same values in both records? Or is there an easier solution than this kind of replication? I can provide a sample replication script which I am using. I am looking forward to your ideas, Mirek
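    Since the articles already go through custom update procedures, one way to get last-change-wins semantics (a sketch only; the column name and the way your custom procedures receive parameters are assumptions) is to carry a modification timestamp and let the procedure drop incoming changes that are older than the row they would overwrite:

        -- Add a modification timestamp that both sides replicate.
        ALTER TABLE dbo.TestTable
            ADD LastModifiedUtc DATETIME NOT NULL DEFAULT GETUTCDATE();

        -- Inside the custom replication update procedure:
        UPDATE t
        SET    Col = @Col,
               LastModifiedUtc = @LastModifiedUtc
        FROM   dbo.TestTable AS t
        WHERE  t.ID = @ID
          AND  t.LastModifiedUtc <= @LastModifiedUtc;   -- older incoming changes lose

    Merge replication with a conflict resolver is the built-in alternative worth comparing before hand-rolling this.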

    Read the article

  • C# Windows Service doesn't seem to like privatePath

    - by SauerC
    I wrote a C# Windows Service to handle task scheduling for our application. I'm trying to move the "business rules" assemblies into a bin subdirectory of the scheduling application to make it easier for us to do updates (stop service, delete all files in the bin folder, replace with new ones, start service). I added

        <runtime>
          <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
            <probing privatePath="bin;"/>
          </assemblyBinding>
        </runtime>

    to the service's app config and it works fine if the service is run as a console application. The problem is that when the service is run as a Windows service, it doesn't work. It appears that when Windows runs the service, the app config file gets read properly, but then the service is executed as if it were in c:\windows\system32 and not the actual EXE location, and that gums up the works. We've got a lot of assemblies so I really don't want to use the GAC or <codeBase>. Is it possible to have the EXE change its base directory back to where it should be when it's run as a service?
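    Two things are often combined for this situation (a sketch; SchedulerService and the folder layout are assumptions, and assembly probing is driven by the AppDomain base directory rather than the working directory, so the resolve handler is the part doing the real work):

        using System;
        using System.IO;
        using System.Reflection;
        using System.ServiceProcess;

        static class Program
        {
            static void Main()
            {
                // Services start with %windir%\system32 as the working directory; pin it back.
                Directory.SetCurrentDirectory(AppDomain.CurrentDomain.BaseDirectory);

                // Fallback: load anything the runtime cannot find from the bin subfolder.
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    string file = new AssemblyName(args.Name).Name + ".dll";
                    string candidate = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin", file);
                    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
                };

                ServiceBase.Run(new SchedulerService());   // hypothetical service class
            }
        }

    If the resolve handler alone fixes it, that points at the config file not being applied the same way it is for the console run (for example, the service being registered against a different executable path).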

    Read the article

  • c#: how do i search for a string with quotes

    - by every_answer_gets_a_point
    i am searching for this string: <!--m--><li class="g w0"><h3 class=r> within the html source of this link: "http://www.google.com/search?sourceid=chrome&ie=UTF-8&q=Santarus Inc?" this is how i am searching for it: d=html.IndexOf(@"<!--m--><li class=""g w0""><h3 class=r><a href=""",1); for some reason it is finding an occurence of it that is incorrect. (it says that it is at position 45 (another words d=45) but this incorrect here are the first couple hundred characters of the string html: <!doctype html><head><title>Santarus Inc&#8206; - Google Search</title><script>window.google={kEI:\"b6jES5nPD4rysQOokrGDDQ\",kEXPI:\"23729,24229,24249,24260,24414,24457\",kCSI:{e:\"23729,24229,24249,24260,24414,24457\",ei:\"b6jES5nPD4rysQOokrGDDQ\",expi:\"23729,24229,24249,24260,24414,24457\"},ml:function(){},kHL:\"en\",time:function(){return(new Date).getTime()},log:function(b,d,c){var a=new Image,e=google,g=e.lc,f=e.li;a.onerror=(a.onload=(a.onabort=function(){delete g[f]}));g[f]=a;c=c||\"/gen_204?atyp=i&ct=\"+b+\"&cad=\"+d+\"&zx=\"+google.time();a.src=c;e.li=f+1},lc:[],li:0,Toolbelt:{}};\nwindow.google.sn=\"web\";window.google.timers={load:{t:{start:(new Date).getTime()}}};try{}catch(u){}window.google.jsrt_kill=1;\n</script><style>body{background:#fff;color:#000;margin:3px 8px}#gbar,#guser{font-size:13px;padding-top:1px !important}#gbar{float:left;height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh

    Read the article

  • How to send arbitrary ftp commands in C#

    - by cchampion
    I have implemented the ability to upload, download, delete, etc. using the FtpWebRequest class in C#. That is fairly straightforward. What I need to do now is support sending arbitrary ftp commands such as "quote SITE LRECL=132 RECFM=FB" or "quote SYST". Here's an example configuration straight from our app.config:

        <!-- The following commands will be executed before any uploads occur -->
        <extraCommands>
          <command>quote SITE LRECL=132 RECFM=FB</command>
        </extraCommands>

    I'm still researching how to do this using FtpWebRequest. I'll probably try the WebClient class next. Can anyone point me in the right direction quicker? Thanks! UPDATE: I've come to that same conclusion; as of .NET Framework 3.5, FtpWebRequest doesn't support anything except what's in WebRequestMethods.Ftp.*. I'll try a third party app recommended by some of the other posts. Thanks for the help!
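    Since FtpWebRequest only exposes the WebRequestMethods.Ftp.* verbs, sending a raw command means speaking to the FTP control channel yourself (or using a third-party FTP library). A bare-bones sketch, with no TLS and only minimal reply handling (host, credentials and the assumption of single-line replies are all illustrative):

        using System;
        using System.IO;
        using System.Net.Sockets;
        using System.Text;

        static void SendSiteCommand()
        {
            using (var client = new TcpClient("ftp.example.com", 21))
            using (var stream = client.GetStream())
            using (var reader = new StreamReader(stream, Encoding.ASCII))
            using (var writer = new StreamWriter(stream, Encoding.ASCII) { AutoFlush = true, NewLine = "\r\n" })
            {
                Console.WriteLine(reader.ReadLine());            // 220 banner
                writer.WriteLine("USER someuser");  Console.WriteLine(reader.ReadLine());
                writer.WriteLine("PASS secret");    Console.WriteLine(reader.ReadLine());

                // The configured pre-upload command; the "quote " prefix used by command-line
                // clients is dropped because the server expects the raw verb.
                writer.WriteLine("SITE LRECL=132 RECFM=FB");
                Console.WriteLine(reader.ReadLine());

                writer.WriteLine("QUIT");
            }
        }

    Multi-line replies (those with a dash after the code, e.g. "230-") would need a small loop instead of a single ReadLine per command.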

    Read the article

  • Why is it not good to use $_SESSION in Restful Implementations?

    - by keisimone
    Original question: I read that for RESTful websites it is not good to use $_SESSION. Why is it not good? How then do I properly authenticate users without looking up the database all the time to check the user's roles? I read that it is not good to use $_SESSION here: http://www.recessframework.org/page/towards-restful-php-5-basic-tips I am creating a WEBSITE, not a web service, in PHP, and I am trying to make it more RESTful, at least in spirit. Right now I am rewriting all the actions to use form POSTs and adding a hidden value called _method, which would be "delete" for the delete action and "put" for the update action. However, I am not sure why it is recommended NOT to use $_SESSION. I would like to know why and what I can do to improve. To allow easy authorization checking, what I did was: after logging in the user, the username is stored in $_SESSION. Every time the user navigates to a page, the page checks whether the username is stored inside $_SESSION and then, based on $_SESSION, retrieves all the info including privileges from the database and evaluates the authorization to access the page based on the info retrieved. Is the way I am implementing this bad? Not RESTful? How do I improve performance and security? Thank you.
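    The REST objection is to server-side conversational state, not to authentication itself: each request should carry everything needed to authorize it. One stateless sketch is a signed (HMAC) cookie that carries the username and role, so pages can authorize without a session or a per-request database lookup (all names here are made up, and hash_equals() needs PHP 5.6+; on older versions use a constant-time compare of your own):

        <?php
        $secret = 'change-me';   // keep outside source control in a real app

        function issue_auth_cookie($username, $role, $secret) {
            $expires = time() + 3600;
            $payload = $username . '|' . $role . '|' . $expires;
            $sig = hash_hmac('sha256', $payload, $secret);
            setcookie('AUTH', $payload . '|' . $sig, $expires, '/', '', false, true);
        }

        function check_auth_cookie($secret) {
            if (!isset($_COOKIE['AUTH'])) return null;
            $parts = explode('|', $_COOKIE['AUTH']);
            if (count($parts) !== 4) return null;
            list($user, $role, $expires, $sig) = $parts;
            if ($expires < time()) return null;
            $expected = hash_hmac('sha256', "$user|$role|$expires", $secret);
            if (!hash_equals($expected, $sig)) return null;
            return array('user' => $user, 'role' => $role);
        }

    The trade-off is that a role change only takes effect when the cookie expires or is reissued, which is why many sites keep sessions for websites and reserve fully stateless auth for APIs.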

    Read the article

  • server/ client server connection

    - by user312054
    I have a server side program that creates a listening server side socket. The problem is that when the client side sends a connect request, it seems to get rejected if the server side socket is listening, but to connect if the server side program is not running. I can see the server side program getting the client request when debugging. It seems as if the client cannot connect to a listening socket. Any suggestions on a resolution? The server side accept code snippet is this:

        void CSocketListen::OnAccept(int nErrorCode)
        {
            CSocket::OnAccept(nErrorCode);
            CSocketServer* SocketPtr = new CSocketServer();
            if (Accept(*SocketPtr))
            {
                // add to list of client sockets connected
            }
            else
            {
                delete SocketPtr;
            }
        }

    The client side connect code is like this:

        SOCKET cellModem;
        sockaddr_in handHeld;
        handHeld.sin_family = AF_INET;                      //Address family
        handHeld.sin_addr.s_addr = inet_addr("127.0.0.1");
        handHeld.sin_port = htons((u_short)1113);           //port to use
        cellModem = socket(AF_INET, SOCK_STREAM, 0);
        if(cellModem == INVALID_SOCKET)
        {
            // log socket failure
            return false;
        }
        else
        {
            // log socket success
        }
        if (connect(cellModem, (const struct sockaddr*)&handHeld, sizeof(handHeld)) != 0 )
        {
            // log socket connection success
        }
        else
        {
            // log socket connection failure
            closesocket(cellModem);
        }
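    One thing stands out in the client snippet, and it would explain "success" being logged exactly when nothing is listening: connect() returns 0 on success and SOCKET_ERROR on failure, so the success/failure branches above are inverted. A corrected sketch:

        // connect() returns 0 on success, SOCKET_ERROR (-1) on failure.
        if (connect(cellModem, reinterpret_cast<const sockaddr*>(&handHeld), sizeof(handHeld)) == 0)
        {
            // log socket connection success; safe to send/recv now
        }
        else
        {
            int err = WSAGetLastError();   // e.g. WSAECONNREFUSED when no listener is present
            // log socket connection failure (err)
            closesocket(cellModem);
        }

    (This assumes WSAStartup() has already been called somewhere in the client, as it must be before any Winsock call.)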

    Read the article

  • Using Facebook Login to create a user?

    - by andbeyond
    I've read this SO post which led me to this FB policy page, which seemed to include some pertinent information, but I'd like more of a community response, maybe some experienced FB API people who know the limits. My question is if I can use Facebook's Login api to, essentially, create a new user on my website. I really would just like to allow users to easily "transfer" some data from FB in order to more easily create a new account on my site. I realize, first and foremost, that I would obviously announce to the user that by click "submit" in the form, that they are creating a separate account on my site. Pertinent blocks on the policy page state: You may cache data you receive through use of the Facebook API in order to improve your application’s user experience, but you should try to keep the data up to date. This permission does not give you any rights to such data. Which doesn't look good for me, but also this: If you stop using Platform or we disable your application, you must delete all data you have received through use of the Facebook API unless: (a) it is basic account information; or (b) you have received explicit consent from the user to retain their data. Which, in my case, I would be satisfying part B. I would be asking the user's permission to retain the data, as I am simply using Facebook as a conveience to the user when creating an account. I also realize that Facebook has a registration API, but this would require a Facebook styled login form, along with my own sites login form, and I'd rather one interface, as this makes it easier for me on the front and back end. Any thoughts?

    Read the article

  • eclipse django using wrong settings.py in pythonpath

    - by user1290264
    I have pydev/django installed in eclipse, and it runs fine. However, after adding a second django project to eclipse and running the server ('http://127.0.0.1:8000') the pythonpath seems to be stuck on project2 even when I run project1. As a summary, I have two django projects: project1, project2. When I run the django server for project1 I get: Validating models... 0 errors found Django version 1.5, using settings 'project1.settings' Development server is running at 'http://127.0.0.1:8000/' Quit the server with CTRL-BREAK. The above seems to suggest that django is using the correct settings file; however, when I go to 'http://127.0.0.1:8000/' it displays the urls from project2. Also, if I go to 'http://127.0.0.1:8000/admin' the models are getting pulled from the sqlite.db file in project2 as well. I've even tried removing project2 from eclipse entirely and now at 'http://127.0.0.1:8000/admin' I get this error: Python Path: ['C:\Users\Brad\workspaces\In Progress\project2', 'C:\Users\Brad\workspaces\In Progress\project2', 'C:\Python27\DLLs', 'C:\Python27\lib', 'C:\Python27\lib\plat-win', 'C:\Python27\lib\lib-tk', 'C:\Python27', 'C:\Python27\lib\site-packages', 'C:\Windows\system32\python27.zip'] If I run the server on a different port with project1 the path seems to be fine: runserver 7000 --noreload Then 'http://127.0.0.1:7000/' uses project1's paths, but it doesn't seem like I should have to do this. Note: I have setup the run configurations as correctly as I know how. In the main tab, the project and main module both point to the correct project (project1), and the "PYTHONPATH that will be used in the run:" includes project1. Also, I have cleared my browser history, cookies, and everything that chrome would let me delete.

    Read the article

  • ASP.NET:Paging when deleting rows

    - by Niels Bosma
    I have a datagrid where a row can be deleted (using ajax). I'm have problem with the pager in the following scenario: Lets say my PageSize is 10, I have 101 rows so that's 11 pages with the last page with an single element. Let no assume that I'm on page 10 (PageIndex=9) and delete a row. Then I go to the 11'th page (who's now empty and doesn't really exist). ASP now shows me the EmptyDataTemplate and no pager so I can't go back. My approach (which isn't working) is to detect this scenario and step one page back: public void Bind() { gridMain.DataBind(); } public void SetPage(int page) { gridMain.PageIndex = page; gridMain.DataBind(); } protected void ldsGridMain_Selecting(object sender, LinqDataSourceSelectEventArgs e) { selectArgs = e; e.Result = (new EnquiryListController()).GetEnquiryList(OnBind(this), supplier); } protected void ldsGridMain_Selected(object sender, LinqDataSourceStatusEventArgs e) { totalRows = selectArgs.Arguments.TotalRowCount; //Detect if we need to update the page: if (gridMain.PageIndex > 0 && (gridMain.PageSize * gridMain.PageIndex + 1) > totalRows) SetPage(gridMain.PageIndex - 1); } protected void gridMain_PageIndexChanging(object sender, GridViewPageEventArgs e) { SetPage(e.NewPageIndex); } I can see that SetPage is called with the the right page index, but the databind doesn't seem to called as I still get the EmptyDataTemplate.

    Read the article

  • Compile subversion on CentOs

    - by peter
    I have downloaded, compiled and installed so far: apr-1.3.9 apr-util-1.3.9 sqlite-3.6.23 zlib-1.2.4 libtool-2.2.6b Now after downloading subversion-1.6.9, the config works fine but compiling it will end with the following error: cd subversion/svn && /bin/sh /root/subversion-1.6.9/libtool --tag=CC --silent --mode=link gcc -g -O2 -g -O2 -pthread -rpath /usr/local/lib -o svn add-cmd.o blame-cmd.o cat-cmd.o changelist-cmd.o checkout-cmd.o cleanup-cmd.o commit-cmd.o conflict-callbacks.o copy-cmd.o delete-cmd.o diff-cmd.o export-cmd.o help-cmd.o import-cmd.o info-cmd.o list-cmd.o lock-cmd.o log-cmd.o main.o merge-cmd.o mergeinfo-cmd.o mkdir-cmd.o move-cmd.o notify.o propdel-cmd.o propedit-cmd.o propget-cmd.o proplist-cmd.o props.o propset-cmd.o resolve-cmd.o resolved-cmd.o revert-cmd.o status-cmd.o status.o switch-cmd.o tree-conflicts.o unlock-cmd.o update-cmd.o util.o ../../subversion/libsvn_client/libsvn_client-1.la ../../subversion/libsvn_wc/libsvn_wc-1.la ../../subversion/libsvn_ra/libsvn_ra-1.la ../../subversion/libsvn_delta/libsvn_delta-1.la ../../subversion/libsvn_diff/libsvn_diff-1.la ../../subversion/libsvn_subr/libsvn_subr-1.la /usr/local/apr/lib/libaprutil-1.la -lexpat /usr/local/apr/lib/libapr-1.la -lrt -lcrypt -lpthread -ldl /usr/bin/ld: cannot find -lexpat collect2: ld returned 1 exit status make: * [subversion/svn/svn] Error 1 The file at /usr/local/apr/lib/libapr-1.la exists and seems to be OK (from permission perspective What could be the problem here? Thanks Peter
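    The link step is failing because the expat library is not on the system, not because of anything in the packages built so far. On CentOS the usual fix (a sketch; the package name can differ between repositories, and building expat from source is the alternative) is:

        # install the expat headers/library, then reconfigure and rebuild
        yum install expat-devel
        ./configure
        make

    If expat is installed in a non-standard prefix instead, pointing the build at it with LDFLAGS=-L/path/to/expat/lib (and CPPFLAGS for the headers) before running configure achieves the same thing.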

    Read the article

  • Cucumber-rails on jruby installs gem into my apps root directory with bundler

    - by brad
    Just installed cucumber 0.7.2 and cucumber-rails 0.3.1 with jruby-1.4.0 on OSX. When I run a bundle install, it places a cucumber-rails directory in my main app with all of the gem code/dependencies. First off, this is definitely not what I want and I'm not sure why this happens for cucumber-rails only. Second, if I delete this folder and just manually install cucumber-rails, when I run script/generate feature blah I get /Users/bradrobertson/.rvm/rubies/jruby-1.4.0/lib/ruby/site_ruby/1.8/rubygems/source_index.rb:344:in `refresh!': source index not created from disk (RuntimeError) from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:34:in `refresh!' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/vendor_gem_source_index.rb:29:in `initialize' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `new' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/rails/gem_dependency.rb:21:in `add_frozen_gem_path' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:298:in `add_gem_load_paths' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:132:in `process' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/initializer.rb:113:in `run' from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:13 from /Users/bradrobertson/Repos/app/source/trunk/config/environment.rb:1:in `require' from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:1 from /Users/bradrobertson/.rvm/gems/jruby-1.4.0/gems/rails-2.3.5/lib/commands/generate.rb:3:in `require' from script/generate:3 Similarly running rake cucumber I get rake aborted! source index not created from disk So something obviously doesn't work. If I add that cucumber-rails directory back in, then my rake cucumber actually runs. Can someone tell me why it would need to install the gem right in my rails app? I've never seen this before. setup jruby-1.4.0 cucumber-0.7.2 cucumber-rails 0.3.1 bundler 0.9.23 webrat 0.7.1

    Read the article

  • Strange unset cookie problem

    - by neobie
    Hi there, I have a strange problem clearing a cookie via PHP. Let's say I have a domain, neobie.net. I store a "remember user login" cookie named "USER_INFO" which contains a string to identify the user's login on their next visit. Now, using Firefox, I see that I have 2 USER_INFO cookies, with domains "www.neobie.net" and ".neobie.net", each with an expiration date of 1 week later. I wrote a logout.php script which clears the cookie for the different domains (.neobie.net, www.neobie.net, neobie.net) to ensure that the USER_INFO cookie is completely cleared for every domain. Now the problem: the user isn't able to clear the cookie by visiting logout.php. I found out that I have to manually delete the cookie with domain "www.neobie.net", leaving ".neobie.net" intact; only then can the cookie be cleared. So I have to make the PHP script set the USER_INFO cookie on ".neobie.net" only, and prevent it from setting the cookie on "www.neobie.net", to make the logout.php script work. But I don't understand why I couldn't clear the cookie for "www.neobie.net" (with leading www; tested on Firefox and Chrome).
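    For what it's worth, a browser only deletes a cookie when the name, path and domain of the expiring setcookie() call match the ones used when the cookie was set, so logout.php has to clear every variant that might exist, and the login code should settle on setting exactly one. A sketch:

        <?php
        // logout.php - expire every USER_INFO variant the site may have set
        // (path '/' is an assumption; it must match the path used at login).
        setcookie('USER_INFO', '', time() - 3600, '/', '.neobie.net');
        setcookie('USER_INFO', '', time() - 3600, '/', 'www.neobie.net');
        setcookie('USER_INFO', '', time() - 3600, '/');   // host-only cookie, no domain attribute

    Going forward, setting the login cookie once with domain '.neobie.net' covers both www.neobie.net and the bare domain, which avoids the duplicate pair showing up in Firefox in the first place.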

    Read the article

  • Javascript force GC collection? / Forcefully free object?

    - by plash
    I have a js function for playing any given sound using the Audio interface (creating a new instance for every call). This works quite well, until about the 32nd call (sometimes less). This issue is directly related to the release of the Audio instance. I know this because I've allowed time for the GC in Chromium to run and it will allow me to play another 32 or so sounds again. Here's an example of what I'm doing: <html><head> <script language="javascript"> function playSound(url) { snd = new Audio(url); snd.play(); delete snd; snd = null; } </script> </head> <body> <a href="#" onclick="playSound('blah.mp3');">Play sound</a> </body></html> I also have this, which works well for pages that have less than 32 playSound calls: var AudioPlayer = { cache: {}, play: function(url) { if (!AudioPlayer.cache[url]) AudioPlayer.cache[url] = new Audio(url); AudioPlayer.cache[url].play(); } }; But this will not work for what I want to do (dynamically replace a div with other content (from separate files), which have even more sounds on them - 1. memory usage would easily skyrocket, 2. many sounds will never play). I need a way to release the sound immediately. Is it possible to do this? I have found no free/close/unload method for the Audio interface. The pages will be viewed locally, so the constant loading of sounds is not a big factor at all (and most sounds are rather short).
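    Since the pages are viewed locally and the sounds are short, one way to stop leaning on the garbage collector is to reuse a small fixed pool of Audio elements instead of creating a new one per click (a sketch; the pool size is arbitrary):

        var AudioPool = {
            size: 8,          // stays well under the ~32-instance ceiling observed above
            slots: [],
            next: 0,
            play: function (url) {
                var snd;
                if (this.slots.length < this.size) {
                    snd = new Audio();
                    this.slots.push(snd);
                } else {
                    snd = this.slots[this.next];
                    this.next = (this.next + 1) % this.size;
                }
                snd.src = url;   // reassigning src recycles the element for the new sound
                snd.play();
            }
        };

        // usage: <a href="#" onclick="AudioPool.play('blah.mp3'); return false;">Play sound</a>

    This trades the per-URL cache above for a bounded number of live Audio objects, so memory stays flat no matter how many different sounds a page references.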

    Read the article

  • Linked Measure Groups and Local Dimensions

    - by ekoner
    Mulling over something I've been reading up on. According to Chris Webb, A linked measure group can only be used with dimensions from the same database as the source measure group. So I took this to mean as long as two cubes share a database, a linked measure group can be used with a dimension. So I created a new cube and added a local measure group, a local dimension and a linked measure group. However, I can't create a relationship between the linked measure group and the local dimension even though they are within the same database. I get the message below: Regular relationships in the current database between non-linked (local) dimensions and linked measure groups cannot be edited. These relationship can only be created through the wizard. This dialog can be used to delete these relationships. I see that I can go to the original cube and add the dimension there, but does the message below mean I have an alternative? I just know it's going to be something simple and trivial! Thanks for reading.

    Read the article
