Search Results

Search found 15132 results on 606 pages for 'svn tools'.


  • How to compile a C DLL for 64 bit with Visual Studio 2010?

    - by Daren Thomas
    I have the C source code for a DLL. It is the code for the General Polygon Clipper (in case you are interested). I'm using it in a C# project via the C# wrapper provided on the homepage, which comes with a precompiled DLL. Since switching to a 64-bit development machine with Visual Studio 2010 and Windows 7 64-bit, the application won't run anymore. This is the error I get: An attempt was made to load a program with an incorrect format. This is because the code DllImports the 32-bit gpc.dll, as I have gathered from what I found on the web. I assume this will all go away if I recompile the DLL for 64-bit, but I can't for the love of me figure out how to do so. My C skills are basic, in that I can write a C program with the GNU tools, but I have no experience with the various compilers / processors / IDEs etc. I believe I could port this to C# - by that I mean I trust myself to actually pull it off - but I'd prefer not to, since it is a lot of work that I'd prefer a compiler to do for me ;)
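
    For a later reader, a minimal C# sketch of why the load fails (the entry point below is illustrative, not necessarily the wrapper's real signature): a 64-bit process cannot load a 32-bit native DLL, so the options are to rebuild gpc.dll for x64 (for example from an x64 Visual Studio configuration) or to force the C# project to run 32-bit via Platform target = x86.

        using System;
        using System.Runtime.InteropServices;

        static class GpcCheck
        {
            // Illustrative P/Invoke; the real declarations live in the wrapper from the GPC homepage.
            [DllImport("gpc.dll", CallingConvention = CallingConvention.Cdecl)]
            static extern void gpc_free_polygon(IntPtr polygon);

            static void Main()
            {
                // DllImport resolves lazily, so this check runs even if gpc.dll is the wrong bitness.
                Console.WriteLine(IntPtr.Size == 8
                    ? "64-bit process: gpc.dll must be an x64 build or loading will fail."
                    : "32-bit process: the stock 32-bit gpc.dll will load.");
            }
        }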

    Read the article

  • Suggestions on how to build an HTML Diff tool?

    - by Danimal
    In this post I asked if there were any tools that compare the structure (not the actual content) of 2 HTML pages. I ask because I receive HTML templates from our designers, and frequently miss minor formatting changes in my implementation. I then waste a few hours of designer time sifting through my pages to find my mistakes. The thread offered some good suggestions, but there was nothing that fit the bill. "Fine, then", thought I, "I'll just crank one out myself. I'm a halfway-decent developer, right?". Well, once I started to think about it, I couldn't quite figure out how to go about it. I can crank out a data-driven website easily enough, or do a CMS implementation, or throw documents in and out of BizTalk all day, but I can't begin to figure out how to compare HTML docs. Well, sure, I have to read the DOM and iterate through the nodes. I have to map the structure to some data structure (how??), and then compare them (how??). It's a development task like none I've ever attempted. So now that I've identified a weakness in my knowledge, I'm even more challenged to figure this out. Any suggestions on how to get started? Clarification: the actual content isn't what I want to compare -- the creative guys fill their pages with lorem ipsum, and I use real content. Instead, I want to compare structure: <div class="foo">lorem ipsum</div> is different from <div class="foo"><p>lorem ipsum</p></div>
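
    One rough way to get started (a sketch, assuming the HtmlAgilityPack parser is available and that ignoring text nodes is acceptable): build a text "signature" of each page's element structure and compare the two signatures, e.g. with any line-based diff tool.

        using System;
        using System.Linq;
        using System.Text;
        using HtmlAgilityPack; // external parser; assumed available

        class StructureDiff
        {
            // Render only element names and class attributes, ignoring text content.
            static void Signature(HtmlNode node, StringBuilder sb, int depth)
            {
                foreach (var child in node.ChildNodes.Where(n => n.NodeType == HtmlNodeType.Element))
                {
                    sb.Append(new string(' ', depth * 2));
                    sb.Append('<').Append(child.Name);
                    string cls = child.GetAttributeValue("class", "");
                    if (cls.Length > 0) sb.Append(" class=\"").Append(cls).Append('"');
                    sb.AppendLine(">");
                    Signature(child, sb, depth + 1);
                }
            }

            static string Signature(string html)
            {
                var doc = new HtmlDocument();
                doc.LoadHtml(html);
                var sb = new StringBuilder();
                Signature(doc.DocumentNode, sb, 0);
                return sb.ToString();
            }

            static void Main()
            {
                string a = "<div class=\"foo\">lorem ipsum</div>";
                string b = "<div class=\"foo\"><p>real content</p></div>";
                // Feed the two signatures to any text diff tool, or just compare them.
                Console.WriteLine(Signature(a) == Signature(b) ? "same structure" : "structure differs");
            }
        }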

    Read the article

  • What are best practices for managing related Cabal packages?

    - by Norman Ramsey
    I'm working on a dataflow-based optimization library written in Haskell. It now seems likely that the library is going to have to be split into two pieces: A core piece with minimal build dependencies; call it hoopl-core. A full piece, call it hoopl, which may have extra dependencies on packages like a prettyprinter, QuickCheck, and so on. The idea is that the Glasgow Haskell Compiler will depend only on hoopl-core, so that it won't be too difficult to bootstrap the compiler. Other compilers will get the extra goodies in hoopl. Package hoopl will depend on hoopl-core. The Debian package tools can build multiple packages from a single source tree. Unfortunately Cabal has not yet reached that level of sophistication. But there must be other library or application designers out there who have similar issues (e.g., one package for a core library, another for a command-line interface, another for a GUI interface). What are current best practices for building and managing multiple related Haskell packages using Cabal?

    Read the article

  • Determining failing sectors on portable flash memory

    - by Faxwell Mingleton
    I'm trying to write a program that will detect signs of failure for portable flash memory devices (thumb drives, etc). I have seen tools in the past that are able to detect failing sectors and other kinds of trouble on conventional mechanical hard drives, but I fear that flash memory does not have the same kind of predictable low-level access to the hardware due to the internal workings of the storage. Things like wear-leveling and other block-remapping techniques (to skip over 'dead' sectors?) lead me to believe that determining if a flash drive is failing will be difficult at best, if not impossible (short of having constant read failures and device unmounts). Flash drives at their end-of-life should be easy to detect (constant CRC discrepancies during reads and all-out failure). But what about drives that might be failing early? Are there any tell-tale signs like slower throughput speeds that might indicate a flash drive is going to fail much sooner than normal? Along the lines of detecting potentially bad blocks, I had considered attempting random reads/writes to a file close to or exactly the size of the entire volume, but even then is it possible that the drive might report sizes under its maximum capacity to account for 'dead' blocks? In short, is there any way to circumvent or at least detect (algorithmically or otherwise) the use of block-remapping or other life extension techniques for flash memory? Let me end this question by expressing my uncertainty as to whether or not this belongs on serverfault.com . This is definitely a hardware-related question, but I also desire a software solution - preferably one that I can program myself. If this question is misplaced, I will be happy to migrate it to serverfault - but I do need a programming solution. Please let me know if you need clarification :) Thanks!
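
    As a starting point for the read/write idea above, here is a minimal C# sketch (paths and sizes are placeholders, and it cannot see past the controller's wear-leveling or the OS file cache): write a reproducible pseudo-random pattern across a large file, read it back, and count mismatching blocks. Rising verify errors or falling throughput would be the warning signs.

        using System;
        using System.IO;

        class FlashSurfaceCheck
        {
            static void Main(string[] args)
            {
                // Path on the flash drive and test size are placeholders; in practice size
                // the file near the drive's free space. The OS file cache can mask read
                // errors unless the file is much larger than RAM or caching is bypassed.
                string path = args.Length > 0 ? args[0] : @"E:\surface_test.bin";
                const int blockSize = 1 << 20;      // 1 MiB per block
                const long totalBytes = 512L << 20; // 512 MiB test file

                var rng = new Random(12345);        // fixed seed so the read pass can regenerate the pattern
                var block = new byte[blockSize];

                using (var fs = new FileStream(path, FileMode.Create, FileAccess.Write))
                {
                    for (long written = 0; written < totalBytes; written += blockSize)
                    {
                        rng.NextBytes(block);
                        fs.Write(block, 0, blockSize);
                    }
                }

                rng = new Random(12345);            // regenerate the same pattern for verification
                var expected = new byte[blockSize];
                var actual = new byte[blockSize];
                long badBlocks = 0;

                using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
                {
                    for (long read = 0; read < totalBytes; read += blockSize)
                    {
                        rng.NextBytes(expected);
                        int n = fs.Read(actual, 0, blockSize);
                        for (int i = 0; i < n; i++)
                        {
                            if (actual[i] != expected[i]) { badBlocks++; break; }
                        }
                    }
                }

                Console.WriteLine("Blocks with mismatches: " + badBlocks);
            }
        }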

    Read the article

  • How does VS 2005 provide history across all TFS Team Projects when tf.exe cannot?

    - by AakashM
    In Visual Studio 2005, in the TFS Source Control Explorer, there is a top-level node for the TFS Server itself, with a child node for each Team Project. Right-clicking either the server node or the node for a Team Project gives a context menu on which there is a View History item. Selecting this gives you a History window showing the last 200 or so changesets, either for the specific Team Project chosen, or across all Team Projects. It is this history across all Team Projects that I am wondering about. The command-line tf.exe history command provides (as I understand it) basically the same functionality as is provided by the VS TFS Source Control plug-in, but I cannot work out how to get tf.exe history to provide this across-all-Team-Projects history. At a command line, supposing I have C:\ mapped as the root of my workspace, and Foo, Bar, and Baz as Team Projects, I can do

        C:\> tf history Foo /recursive /stopafter:200

    to get the last 200 changesets that affected Team Project Foo; or, from within a Team Project folder,

        C:\Bar> tf history *.* /recursive /stopafter:200

    which does the same thing for Team Project Bar - note that the wildcard *.* is allowed here. However, none of these work (each gives the error message shown):

        C:\> tf history /recursive /stopafter:200
        The history command takes exactly one item

        C:\> tf history *.* /recursive /stopafter:200
        Unable to determine the source control server

        C:\> tf history *.* /server:servername /recursive /stopafter:200
        Unable to determine the workspace

    I don't see an option in the docs for tf for specifying a workspace; it seems to only want to determine it from the current folder. So what is VS 2005 doing? Is it internally doing a history on each Team Project in turn and then sticking the results together? Note also that I have tried the Power Tools; tfpt history from the command line gives exactly the same error messages shown here.
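
    One way to see what the IDE might be doing is to go through the TFS client API rather than tf.exe. This is only a sketch from memory - the assemblies are the TFS 2005/2008 client libraries and the exact QueryHistory overload should be checked against the SDK docs - but querying the server-path root "$/" recursively covers every Team Project without needing a workspace:

        using System;
        using Microsoft.TeamFoundation.Client;                 // TFS 2005/2008 client assemblies (assumed referenced)
        using Microsoft.TeamFoundation.VersionControl.Client;

        class ServerWideHistory
        {
            static void Main()
            {
                // Server URL is a placeholder.
                TeamFoundationServer tfs = TeamFoundationServerFactory.GetServer("http://tfsserver:8080");
                var vcs = (VersionControlServer)tfs.GetService(typeof(VersionControlServer));

                // "$/" is the server path above all Team Projects, so a recursive history
                // query from here spans every project - no local workspace is involved.
                // Parameter order of this overload is from memory; verify before use.
                var history = vcs.QueryHistory(
                    "$/", VersionSpec.Latest, 0, RecursionType.Full,
                    null, null, null, 200, false, true);

                foreach (Changeset cs in history)
                    Console.WriteLine("{0}  {1}  {2}", cs.ChangesetId, cs.Committer, cs.Comment);
            }
        }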

    Read the article

  • Use a resource dictionary as theme in Silverlight

    - by SaphuA
    Hello, I have developed an application which allows the user to switch between themes. I'm doing this by including the XAML file as a resource in my project and using the following code:

        MainTheme.ThemeUri = new Uri("SilverlightApplication1;component/Themes/[ThemeName]/Theme.xaml", UriKind.Relative);

    This worked well, until I found these themes: http://timheuer.com/blog/archive/2010/05/17/silverlight-4-tools-released-and-new-application-templates.aspx The difference is that these themes consist of multiple files, so I made a Theme.xaml file that only includes MergedDictionaries, so I could still use the code above. This is the Theme.xaml file for the Cosmopolitan theme:

        <ResourceDictionary xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
            <ResourceDictionary.MergedDictionaries>
                <ResourceDictionary Source="CoreStyles.xaml"/>
                <ResourceDictionary Source="SDKStyles.xaml"/>
                <ResourceDictionary Source="Styles.xaml"/>
                <ResourceDictionary Source="ToolkitStyles.xaml"/>
            </ResourceDictionary.MergedDictionaries>
        </ResourceDictionary>

    However, when I run the C# code above I get the following exception: System.Windows.Markup.XamlParseException: Failed to assign to property 'System.Windows.ResourceDictionary.Source'. Just to be clear, using the MergedDictionaries method does work when I set it in my App.xaml:

        <Application.Resources>
            <ResourceDictionary>
                <ResourceDictionary.MergedDictionaries>
                    <ResourceDictionary Source="Themes/Cosmopolitan/Theme.xaml"/>
                </ResourceDictionary.MergedDictionaries>
            </ResourceDictionary>
        </Application.Resources>

    What am I doing wrong? Thanks!
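
    If it turns out the relative Source values are the problem, one workaround sketch (the assembly name and file list simply mirror the question above; this is not necessarily how MainTheme.ThemeUri works internally) is to skip the wrapper Theme.xaml and merge each theme file from code with a full component URI:

        using System;
        using System.Windows;

        static class ThemeLoader
        {
            // Merge each theme file individually, using full "assembly;component/..." URIs,
            // so nothing inside the dictionary has to resolve a relative Source.
            public static void Apply(string themeName)
            {
                string[] parts = { "CoreStyles.xaml", "SDKStyles.xaml", "Styles.xaml", "ToolkitStyles.xaml" };
                var merged = Application.Current.Resources.MergedDictionaries;
                merged.Clear();
                foreach (string part in parts)
                {
                    var rd = new ResourceDictionary
                    {
                        Source = new Uri(
                            string.Format("SilverlightApplication1;component/Themes/{0}/{1}", themeName, part),
                            UriKind.Relative)
                    };
                    merged.Add(rd);
                }
            }
        }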

    Read the article

  • C compiler cannot create executables when trying to build Binutils

    - by Koning Baard XIV
    I am trying to build Linux From Scratch, and now I am at chapter 5.4, which tells me how to build Binutils. I have binutils 2.20's source code, but when I try to build it:

        time { ./binutils-2.20/configure --target=$LFS_TGT --prefix=/tools --disable-nls --disable-werror ; }

    it gives me an error:

        checking build system type... i686-pc-linux-gnu
        checking host system type... i686-pc-linux-gnu
        checking target system type... i686-lfs-linux-gnu
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether ln works... yes
        checking whether ln -s works... yes
        checking for a sed that does not truncate output... /bin/sed
        checking for gawk... gawk
        checking for gcc... GCC
        checking for C compiler default output file name...
        configure: error: in `/media/LFS':
        configure: error: C compiler cannot create executables
        See `config.log' for more details.

    You can see my config.log at pastebin.com: http://pastebin.com/hX7v5KLn I have just installed Ubuntu 10.04, and reinstalled GCC and installed G++. Also, the build is done by a non-root, non-admin user called 'lfs' (which is also described in Linux From Scratch), and on a different partition than where the system is installed. Can anyone help me? Thanks

    Read the article

  • Switching from Java to .NET from a career change point of view

    - by Joe
    Could anyone share with me their experience with switching from Java to .NET from a career point of view? I've been a Java developer for 12 years and am just getting tired of how fragmented the Java world has become. For my liking, there's just too many frameworks, tools, application servers, etc.. And it seems each new tool just adds complexity and time to even the simplest of projects. I'm not trying to start any wars - I'm just giving you the reason I ask the main question. I've read a few books on .NET and have done one WebForms job. I love the integrated environment and would like to hear how others transitioned from Java to .NET. What I mean by that is did you do it somehow as a contractor or did you join a company as a beginner .NET developer with much Java experience? Personally, I'm ready to take the leap if I can figure out how to not lose too much income in the process (Senior Java developer to beginner .NET developer). I would really appreciate hearing your stories.

    Read the article

  • Create CAB file for ActiveX installation for IE

    - by vikasde
    I created a CAB file that contains my ActiveX control using CABARC.exe. I also created an .inf file. My .inf file looks like this:

        [version]
        signature="$CHICAGO$"
        AdvancedINF=2.0

        [Add.Code]
        MySetup.exe=MySetup.exe

        [MySetup.exe]
        file-win32-x86=thiscab
        clsid={49892510-B520-4b35-8ADF-57084DD2F717}

    My HTML looks like this:

        <object name="secondobj" style='display:none' id='TestActivex' classid='CLSID:49892510-B520-4b35-8ADF-57084DD2F717' codebase='http://myurl/MySetup.cab#version=1,0,0,0'></object>

    I created the CAB using the following command:

        C:\tools\Cab\BIN>CABARC.EXE N MySetup.cab MySetup.msi setup.inf

    I also added http://myurl to the trusted sites. The first time I opened the HTML page in IE, I saw a yellow bar, which I accepted. However, it never installed the ActiveX control. I don't see the installation in my Program Files, nor can I see anything in the event logs, the temporary download folder, or "Manage Add-ons". Now every time I open the web page in IE, I do not see the yellow bar anymore. Can anybody help me out here, please?

    Read the article

  • Which parts of Sharepoint do I need to understand to build a publicly facing website?

    - by Petras
    I am building a publicly facing website that does the following. Users log in, and then view a list of their customers. They click on a customer to view their past purchases, order them, change them, etc. This is not a shopping site, by the way; it is a simple look-up tool. Note that none of the data accessed by the website is in anything other than a SQL database - no Office documents. Also, the login does not use the users' Windows credentials on a VPN or anything like that. Typically I would build this using a standard ASP.NET MVC website, but the client says they want to use SharePoint. As I understand it, SharePoint is used for workflow and for websites that are collaboration tools, such as the components you can see here: http://www.sharepointhosting.com/sharepoint-features.html Here are my questions: Would I be right in saying that WSS is completely inappropriate for this task, as it comes with an overhead that provides no benefits? If I had to use it, would I need WSS or MOSS? If I had to use it, would I be right in saying the site would consist of a) Web Parts and b) a custom site layout? How do I create one of these?

    Read the article

  • Constructing human readable sentences based on a survey

    - by Joshua
    The following is a survey given to course attendees to assess an instructor at the end of the course.

        Communication Skills
        1. The instructor communicated course material clearly and accurately. Yes / No
        2. The instructor explained course objectives and learning outcomes. Yes / No
        3. In the event of not understanding course materials, the instructor was available outside of class. Yes / No
        4. Was instructor feedback and the grading process clear and helpful? Yes / No
        5. Do you feel that your oral and written skills have improved while in this course? Yes / No

    We would like to summarize each attendee's selections based on the choices they made. If the provided answers were [No, No, Yes, Yes, Yes], then we would summarize this as: "The instructor was not able to summarize course objectives and learning outcomes clearly, but was available and usually helpful outside of class. The instructor feedback and grading process was clear and helpful, and I feel that my oral and written skills have improved because of this course." Based on the selections chosen by the attendee, the summary would be quite different. This leads to many possible answers based on the choices selected and the number of such questions in the survey. The questions are usually provided by the training organization. How do you come up with a generic solution so that this can be effectively translated into a human-readable form? I am looking for tools or libraries (Java-based), or suggestions, which will help me create such human-readable output. I would like to hide the complexity from the end users as much as possible.

    Read the article

  • Strange error when filling a data adapter.

    - by Tim C
    I am receiving the following error in my code (C#, .NET 3.5, VS2008) when I try to connect to an Excel sheet and fill an OleDbDataAdapter with the results of a query. First the error:

        Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    And here is the code, which is honestly pretty simple:

        var excelFileName = string.Format("c:/Metadata_Tool.xlsm");
        var connectionString = string.Format("Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; Extended Properties=Excel 12.0;HDR=YES;", excelFileName);
        var adapter = new OleDbDataAdapter("Select * FROM [Video Tagging XML]", connectionString);
        var ds = new DataSet();
        adapter.Fill(ds, "VTX");
        DataTable data = ds.Tables["VTX"];
        foreach (DataRow myRow in data.Rows)
        {
            foreach (DataColumn myColumn in data.Columns)
            {
                Console.Write("\t{0}", myRow[myColumn]);
            }
            Console.WriteLine();
        }
        Console.ReadLine();

    I get the error on the line adapter.Fill(ds, "VTX");. I did find a Microsoft forum post saying to turn on JIT optimization in VS2008 from the Tools/Options/Debug/General menu, but this did not seem to help. Any help would be greatly appreciated, thanks!

    Read the article

  • Change Data Capture or Change Tracking - Same as Traditional Audit Trail Table?

    - by HardCode
    Before I delve into the abyss of Microsoft documentation any deeper, I'd like to know if someone experienced with Change Data Capture and Change Tracking knows whether one or both of these can be used to replace the traditional ... "Audit trail table copy of the 'real table' (all of the fields of the original table, plus date/time, user ID, and DML action field) inserted into by Triggers" ... setup for a database table audit trail, where the trigger populates the audit trail table (which is all manual work). The MSDN overview documentation explains at a high level what Change Data Capture and Change Tracking are, but it isn't clear enough to me, and doesn't state outright, that these tools can be used to replace the traditional audit trail tables we've made so often. Can someone with any experience using Change Data Capture and Change Tracking save me a lot of time, or confirm that I am spending time looking at the right tool? The critical part of our audit trail is capturing all changes to a table's fields (on INSERT, UPDATE, DELETE), when it happened, and who did it. These changes are commonly provided to an end user chronologically via an audit trail report. Which leads to another question: if Change Data Capture or Change Tracking is the solution, I'd assume that this data can be queried just like data from a normal table? EDIT: I need a permanent audit trail, regardless of time. I see that Change Data Capture has to do with the transaction logs, so this sounds finite to me.

    Read the article

  • What is your preferred tool stack for PHP development in the Windows Environment?

    - by Tim Visher
    I have been developing basic web sites for awhile now with some PHP thrown in for getting dynamic stuff done. However, I recently decided that it was time I got my hands a little dirtier so I wanted to start to play with the underpinnings of Wordpress and other such apps. I work on a Mac at home and have been using Coda for most of my editing needs and I love it. Also, to manage the services stack I use MAMP at home. However, I've begun to realize that for heavy PHP and Web work (AJAX, etc.), more is needed. I'm very interested to hear the kinds of tools more experienced web developers use on a day to day basis to manage the entire work flow. I found this article over on developer tutorials and decided to go with XAMPP (for the moment) for managing the services. However, IDE, Source Control, Debugger, etc. are all up for grabs. Anyway, your thoughts are much appreciated. And, if at all possible, try to describe your entire stack with what each tool fulfills for your development process.

    Read the article

  • Qmake does not specify a valid qt

    - by Comptrol
    After installing the Qt SDK for Open Source C++ development on Mac OS by following the respective steps ("Note for the binary package: If you have the binary package, simply double-click on the Qt.mpkg and follow the instructions to install Qt.") - yes, that is all I have done to install Qt on Mac OS X - everything was going fine until I ran a sample application, whose compile output resulted in:

        No valid Qt version set. Set one in Preferences
        Error while building project qtilk
        When executing build step 'QMake'
        Canceled build.

    Then I tried to change the respective Qt version in Preferences; when I hovered over the Path, I realized my mkspec isn't set. Then I tried querying qmake with qmake -query:

        QT_INSTALL_PREFIX:/
        QT_INSTALL_DATA:/usr/local/Qt4.6
        QT_INSTALL_DOCS:/Developer/Documentation/Qt
        QT_INSTALL_HEADERS:/usr/include
        QT_INSTALL_LIBS:/Library/Frameworks
        QT_INSTALL_BINS:/Developer/Tools/Qt
        QT_INSTALL_PLUGINS:/Developer/Applications/Qt/plugins
        QT_INSTALL_TRANSLATIONS:/Developer/Applications/Qt/translations
        QT_INSTALL_CONFIGURATION:/Library/Preferences/Qt
        QT_INSTALL_EXAMPLES:/Developer/Examples/Qt/
        QT_INSTALL_DEMOS:/Developer/Examples/Qt/Demos
        QMAKE_MKSPECS:/usr/local/Qt4.6/mkspecs
        QMAKE_VERSION:2.01a
        QT_VERSION:4.6.2

    QMAKE_MKSPECS seems to be set here? Will setting my mkspec solve my build problem? I tried setting it by typing export mkspec=macx-g++, but mkspec still seems not to be set to anything. I am all ears waiting for your answers. Thanks in advance.

    Read the article

  • Locating memory leak in Apache httpd process, PHP/Doctrine-based application

    - by Sam
    I have a PHP application using these components:

        Apache 2.2.3-31 on CentOS 5.4
        PHP 5.2.10
        Xdebug 2.0.5 with Remote Debugging enabled
        APC 3.0.19
        Doctrine ORM for PHP 1.2.1 using Query Caching and Results Caching via APC
        MySQL 5.0.77 using Query Caching

    I've noticed that when I start up Apache, I eventually end up with 10 child processes. As time goes on, each process grows in memory until it approaches 10% of available memory, which begins to slow the server to a crawl since together they grow to take up 100% of memory. Here is a snapshot of my top output:

        PID  USER    PR  NI  VIRT   RES  SHR  S  %CPU  %MEM  TIME+    COMMAND
        1471 apache  16   0  626m  201m  18m  S   0.0  10.2  1:11.02  httpd
        1470 apache  16   0  622m  198m  18m  S   0.0  10.1  1:14.49  httpd
        1469 apache  16   0  619m  197m  18m  S   0.0  10.0  1:11.98  httpd
        1462 apache  18   0  622m  197m  18m  S   0.0  10.0  1:11.27  httpd
        1460 apache  15   0  622m  195m  18m  S   0.0  10.0  1:12.73  httpd
        1459 apache  16   0  618m  191m  18m  S   0.0   9.7  1:13.00  httpd
        1461 apache  18   0  616m  190m  18m  S   0.0   9.7  1:14.09  httpd
        1468 apache  18   0  613m  190m  18m  S   0.0   9.7  1:12.67  httpd
        7919 apache  18   0  116m   75m  15m  S   0.0   3.8  0:19.86  httpd
        9486 apache  16   0  97.7m  56m  14m  S   0.0   2.9  0:13.51  httpd

    I have no long-running scripts (they all terminate eventually, the longest taking maybe 2 minutes), and I am working under the assumption that once each script terminates, the memory it uses gets deallocated (maybe someone can correct me on that). My hunch is that it could be APC, since it stores data between requests, but at the same time it seems weird that it would store data inside the httpd process. How can I track down which part of my app is causing the memory leak? What tools can I use to see how the memory usage is growing inside the httpd process and what is contributing to it?

    Read the article

  • Is it possible that a single-threaded program is executed simultaneously on more than one CPU core?

    - by Wolfgang Plaschg
    When I run a single-threaded program that I have written on my quad-core Intel, I can see in the Windows Task Manager that all four cores of my CPU are actually more or less active. One core is more active than the other three, but there is also activity on those. There's no other program running (besides the OS kernel, of course) that would plausibly explain that activity. And when I close my program, all activity on all cores drops down to nearly zero. All that is left is a little "noise" on the cores, so I'm pretty sure all the visible activity comes directly or indirectly (like invoking system routines) from my program. Is it possible that the OS or the cores themselves try to balance some code or execution across all four cores, even though it's not a multithreaded program? Do you have any links that document this technique? Some info on the program: it's a console app written in Qt, and the Task Manager states that only one thread is running. Maybe Qt uses threads, but I don't use signals or slots, nor any GUI. Link to Task Manager screenshot: http://img97.imageshack.us/img97/6403/taskmanager.png This question is language agnostic and not tied to Qt/C++; I just want to know if Windows or Intel balance single-threaded code across all cores as well. If they do, how does this technique work? All I can think of is that kernel routines like reading from disk etc. are scheduled on all cores, but this won't improve performance significantly since the code still has to run synchronously with the kernel API calls. EDIT: Do you know any tools for doing a better analysis of single- and/or multi-threaded programs than the poor Windows Task Manager?
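
    Not an answer to the scheduling question itself, but one quick experiment (a C# sketch; the same idea works from native code via SetProcessAffinityMask) is to pin the process to a single core and watch Task Manager again. If the per-core activity was just the scheduler migrating the one thread between cores, it should now all land on core 0:

        using System;
        using System.Diagnostics;

        class AffinityTest
        {
            static void Main()
            {
                var proc = Process.GetCurrentProcess();
                Console.WriteLine("Affinity mask before: {0:X}", (long)proc.ProcessorAffinity);

                // Pin the whole process to core 0 (bit 0 of the affinity mask).
                proc.ProcessorAffinity = (IntPtr)1;

                // Busy-work for ~10 seconds so there is something to watch in Task Manager.
                double x = 0;
                var sw = Stopwatch.StartNew();
                while (sw.ElapsedMilliseconds < 10000)
                    x += Math.Sqrt(x + 1);
                Console.WriteLine("Done: {0}", x);
            }
        }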

    Read the article

  • Pros/Cons of MySQL vs Postgresql for production Ruby on Rails environment?

    - by cakeforcerberus
    I will soon be switching from sqlite3 to either postgres or mysql. What should I consider when making this decision? Is mysql more suited for Rails than postgres in some areas and/or vice versa? Or, as I somewhat suspect, does it not really matter either way? Another factor that might play into my decision is the availability of tools to data pump my test data from the sqlite3 db to my new one. Is there anything that ActiveRecord provides natively to do this or any decent plugins/gems to help with this task? BONUS: How do I pronounce "Postgresql" and sound like I know what I'm talking about? :) Thanks Greg Smith for providing the following link that shows the most common pronunciations: http://www.postgresql.org/community/survey.33 UPDATE: Reference this question for more: http://stackoverflow.com/questions/110927/do-you-recommend-postgresql-over-mysql FYI: I ended up using MySQL. There is a neat plugin called yamldb that really saved me some time with the data transfer from my sqlite db to my new mysql one. Instructions on how to install and use it can be found here: http://accidentaltechnologist.com/ruby/change-databases-in-rails-with-yamldb/ Thanks Tom

    Read the article

  • Sorting, Filtering and Paging in ASP.NET MVC

    - by ali62b
    What is the best approach to implementing these features, and which parts of the project would be involved? I see some examples of JavaScript grids, but I'm talking about a general approach which best fits the MVC architecture. I've considered configuring routes and models to implement these features, but I don't have a clear idea of whether this is the right approach to implementing such features. On the one hand, I think if we put the logic in routes (item/page/sort/), we get benefits like bookmarking and avoiding JavaScript. On the other hand, if we use JavaScript grids, we can have behavior like the old-school grid views in ASP.NET Web Forms. I find that using HTML helpers may be useful for paging, but I have no idea if they are good for sorting or not. I've looked at jQuery, the tableSorter and quick search plug-ins, but they work just on the currently-fetched data and won't help with real sorting and filtering that may need to touch the database. I have some thoughts on using these tools side by side with AJAX to get something which works, but I have no idea if there are similar efforts done yet anywhere. Another approach I looked at was using Dynamic Data on Web Forms, but I didn't find any suggestions out there as to whether or not it is a good idea to integrate MVC and DD. I know implementing filtering and sorting for an individual case is simple (although it has some issues, like using Dynamic LINQ, which is not yet a standard approach), but creating a sorting or filtering tool which works in all cases is the idea I'm looking for. (Maybe this is because I want to have something in hand when Web Forms developers are wondering why I'm writing the same code each time I want to implement a sort scenario for different entities.)
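
    For what it's worth, a minimal sketch of the bookmarkable, no-JavaScript route: keep sort and page state in the query string and apply it with LINQ before the view renders. The Product type and in-memory data below are placeholders for whatever the real model and repository are:

        using System;
        using System.Linq;
        using System.Web.Mvc;

        public class Product { public string Name { get; set; } public decimal Price { get; set; } }

        public class ProductsController : Controller
        {
            // In-memory stand-in; a real app would query the database through its ORM.
            private static readonly IQueryable<Product> Products =
                Enumerable.Range(1, 100)
                          .Select(i => new Product { Name = "Product " + i, Price = i * 1.5m })
                          .AsQueryable();

            // /Products?sort=Price&desc=true&page=2 -- sort and page state live in the URL,
            // so results are bookmarkable and work without JavaScript.
            public ActionResult Index(string sort, bool? desc, int? page)
            {
                const int pageSize = 20;
                int pageNumber = Math.Max(page ?? 1, 1);

                // Whitelist the sortable columns; anything unknown falls back to Name.
                IQueryable<Product> query = sort == "Price"
                    ? (desc == true ? Products.OrderByDescending(p => p.Price) : Products.OrderBy(p => p.Price))
                    : (desc == true ? Products.OrderByDescending(p => p.Name) : Products.OrderBy(p => p.Name));

                var model = query.Skip((pageNumber - 1) * pageSize).Take(pageSize).ToList();
                return View(model);
            }
        }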

    Read the article

  • Kohana 3, themes outside application.

    - by Marek
    Hi all, I read http://forum.kohanaframework.org/comments.php?DiscussionID=5744&page=1#Item_0 and I want to use a similar solution, but with a db. In my site controller's after():

        $theme = $page->get_theme_name(); // Orange
        Kohana::set_module_path('themes', Kohana::get_module_path('themes').'/'.$theme);
        $this->template = View::factory('layout')

    I checked with Firebug:

        fire::log(Kohana::get_module_path('themes')); // D:\tools\xampp\htdocs\kohana\themes/Orange

    I am sure that path exists. Directly in the 'Orange' folder I have a 'views' folder with a layout.php file. But I am getting: The requested view layout could not be found. The extended Kohana_Core is just:

        public static function get_module_path($module_key)
        {
            return self::$_modules[$module_key];
        }

        public static function set_module_path($module_key, $path)
        {
            self::$_modules[$module_key] = $path;
        }

    Could anybody help me with solving that issue? Maybe it is a .htaccess problem:

        # Turn on URL rewriting
        RewriteEngine On

        # Put your installation directory here:
        # If your URL is www.example.com/kohana/, use /kohana/
        # If your URL is www.example.com/, use /
        RewriteBase /kohana/

        # Protect application and system files from being viewed
        RewriteCond $1 ^(application|system|modules)

        # Rewrite to index.php/access_denied/URL
        RewriteRule ^(.*)$ / [PT,L]

        RewriteRule ^(media) - [PT,L]
        RewriteRule ^(themes) - [PT,L]

        # Allow these directories and files to be displayed directly:
        # - index.php (DO NOT FORGET THIS!)
        # - robots.txt
        # - favicon.ico
        # - Any file inside of the images/, js/, or css/ directories
        RewriteCond $1 ^(index\.php|robots\.txt|favicon\.ico|static)

        # No rewriting
        RewriteRule ^(.*)$ - [PT,L]

        # Rewrite all other URLs to index.php/URL
        RewriteRule ^(.*)$ index.php/$1 [PT,L]

    Could somebody help? What am I doing wrong? Regards

    Read the article

  • CSS Parser - Insert mtimes

    - by brad
    What command line tool can I use to automatically insert mtimes into urls in my css files for the purposes of breaking the cache? /* before */ .example { background: url(example.jpg); } /* after */ .example { background: url(example.jpg?1271298451); } Also, I would like this tool to spit out the latest mtime as the css files mtime. (If the css file is still cached then the new urls will not get to the client.) In searching the web, I have found very few tools that can do this. I am even considering rolling my own, but have found very little in the way of css parsers that are actively maintained. A candidate should be: fast (I don't want to wait 30 seconds on deployment) command line accessible (something like "cat foo.css bar.css | cssmtime out.css") What I've found so Far yui compressor - initially I thought I would extend the yui compressor to do this, but found that it is implemented as a bunch of regex's and not a parser. csstidy - last release was in 2007 and development has been suspended, but does have an option for inserting mtimes (also written in php, something I have no experience in) cssutils - python sac implementation - seems to be actively maintained, but also seems like overkill for my needs. Also, written in python which I have experience with csspool - ruby sac implementation - I don't know much ruby, but would like to learn other sac implementations - There are several java implementations, and a c implementation neither of which I know much about What's your experience? Have you used any of these libraries? Was the experience positive? Would you recommend I go with them for my purposes?

    Read the article

  • Powershell Regex help in extracting text between strings

    - by vivekeviv
    I have arguments like the ones below which I pass to a PowerShell script:

        -arg1 -abc -def -arg2 -ghi -jkl -arg3 -123 -234

    Now I need to extract three strings, without any whitespace:

        string 1: "-abc -def"
        string 2: "-ghi -jkl"
        string 3: "-123 -234"

    I figured this expression could do it, but it doesn't seem to work:

        $args -match '-arg1(?'arg1'.*?) -arg2(?'arg2'.*?) -arg3(?'arg3'.*)'

    This should return $matches['arg1'] etc. So what's wrong in the above expression? Why do I get an error as shown below?

        runScript.ps1 -arg1 -abc -def -arg2 -ghi -jkl -arg3 -123 -234
        Unexpected token 'arg1'.?) -arg2 (?'arg2'.?) -arg3 (?'arg3'.)'' in expression or statement.
        At G:\powershell\tools\powershell\runTest.ps1:1 char:71
        + $args -match '-arg1 (?'arg1'.?) -arg2 (?'arg2'.?) -arg3 (?'arg3'.)' <<<<
            + CategoryInfo          : ParserError: (arg1'.?) -arg2...g3 (?'arg3'.)':String) [], ParseException
            + FullyQualifiedErrorId : UnexpectedToken

    The second question is: how do I make arg1, arg2 or arg3 optional? The argument to the script can be just -arg2 -def -ghi; I'll take some default value for any arg(1|2|3) that is not mentioned. Thanks

    Read the article

  • ASP.NET MVC 2: How to write this Linq SQL as a Dynamic Query (using strings)?

    - by Dr. Zim
    Skip to the "specific question" as needed. Some background: The scenario: I have a set of products with a "drill down" filter (Query Object) populated with DDLs. Each progressive DDL selection will further limit the product list as well as what options are left for the DDLs. For example, selecting a hammer out of tools limits the Product Sizes to only show hammer sizes. Current setup: I created a query object, sent it to a repository, and fed each option to a SQL "table valued function" where null values represent "get all products". I consider this a good effort, but far from DDD acceptable. I want to avoid any "programming" in SQL, hopefully doing everything with a repository. Comments on this topic would be appreciated. Specific question: How would I rewrite this query as a Dynamic Query? A link to something like 101 Linq Examples would be fantastic, but with a Dynamic Query scope. I really want to pass to this method the field in quotes "" for which I want a list of options and how many products have that option. (from p in db.Products group p by p.ProductSize into g select new Category { PropertyType = g.Key, Count = g.Count() }).Distinct(); Each DDL option will have "The selection (21)" where the (21) is the quantity of products that have that attribute. Upon selecting an option, all other remaining DDLs will update with the remaining options and counts.

    Read the article

  • Silverlight 4 seems like starving of memory

    - by Marco
    I have been playing a bit with Silverlight and try to port my Silverlight 3.0 application to Silverlight 4.0. My application loads different XAP files and upon a user request create an instance of a Xaml user control and adds it to the main container, in a sort of MEF approach in order I can have an extensible and pluggable application. The application is pretty huge and to keep acceptable the performances and the initial loading I have built up some helper classes to load in the background all pages and user controls that might be used later on. On Silverlight 3.0 everything was running smoothly without any problem so far. Switching to SL 4.0 I have noticed that when the process approaches to create the instances of the user controls the layout freezes unexpectedly for a minute and sometimes for more. Looking at the task manager the memory usage of IE jumps from 50MB to 400MB and sometimes up to 1.5 GB. If the process won't take that much the layout is rendered properly even though the memory usage is still extremely high. Otherwise everything crashes due to out of memory exception. Running the same application compiled in SL3, the memory used is about 200MB when all the usercontrols are loaded. Time spent to load the application in SL3 is about 10 seconds, while it takes up to 3 mins in SL4 There are no transparencies, no opacities set, no effects and animations in the layout. User controls are instantied on the fly and added or removed in the visual tree on purpose when the user switches from one screen to another. The resources are all cleaned properly when a usercontrol is removed from the visual tree to allow the GC to operate in the background. I may do something wrong but I could not figure out where exactly nail out the source of this problem. As far as I know there is no memory profiler in SL4 that can help me out to find where to look at. But again I could not be updated on new debugging tools available.

    Read the article

  • JQuery Tool tips

    - by kwek-kwek
    I want to create a tooltip for an image with a link. I had it working for one, but it doesn't work with the 2nd image. Here is my sample code:

        <!-- trigger element. a regular workable link -->
        <a id="test" title="Name - Title">Name</a>

        <!-- tooltip element -->
        <div class="tooltip">
            <div><span class="name">Name</span><br />
            Title <span><a href="#">more info»</a></span></div>
        </div>

        <!-- trigger element. a regular workable link -->
        <a id="test2" title="Name - Title">Name</a>

        <!-- tooltip element -->
        <div class="tooltip2">
            <div><span class="name">Name</span><br />
            Title <span><a href="#">more info»</a></span></div>
        </div>

    and here is the script that makes it all happen:

        <script>
        // What is $(document).ready ? See: http://flowplayer.org/tools/using.html#document_ready
        $(document).ready(function() {
            // enable tooltip for "test" element. use the "slide" effect
            $("#test").tooltip({ effect: 'slide', offset: [50, 40] });
            $("#test2").tooltip2({ effect: 'slide', offset: [50, 40] });
        });
        </script>

    but it's not working - please help.

    Read the article
