Search Results

Search found 10622 results on 425 pages for 'shared hosting'.


  • Port Win32 DLL hook to Linux

    - by peachykeen
    I have a program (NWShader) which hooks into a second program's OpenGL calls (NWN) to do post-processing effects and whatnot. NWShader was originally built for Windows, generally modern versions (Win32), and uses both DLL exports (to get Windows to load it and grab some OpenGL functions) and Detours (to hook into other functions). I'm using the trick where Windows will look in the current directory for any DLLs before checking the system directory, so it loads mine. I have one DLL that redirects with this method:

        #pragma comment(linker, "/export:oldFunc=nwshader.newFunc")

    to send calls to a differently named function in my own DLL. I then do any processing and call the original function from the system DLL.

    I need to port NWShader to Linux (NWN exists in both flavors). As far as I can tell, what I need to make is a shared library (.so file). If this is preloaded before the NWN executable (I found a shell script to handle this), my functions will be called. The only problems are that I need to call the original function (I would use the various dynamic-loading methods for this, I think) and I need to be able to do Detour-like hooking of internal functions. At the moment I'm building on Ubuntu 9.10 x64 (with the 32-bit compiler flags). I haven't been able to find much on Google to help with this, but I don't know exactly what the *nix community calls it. I can code C++, but I'm more used to Windows. Being OpenGL, the only parts that need modification to be compatible with Linux are the hooking code and the calls. Is there a simple and easy way to do this, or will it involve recreating Detours and dynamically loading the original function addresses?
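    For illustration, a minimal sketch of the usual Linux equivalent: export a function with the same name as the OpenGL entry point from a shared object, preload it, and resolve the real implementation with dlsym(RTLD_NEXT, ...). Here glClear stands in for whichever call NWShader actually hooks:

        /* shim.c -- build: gcc -m32 -shared -fPIC shim.c -o libnwshader.so -ldl */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <GL/gl.h>

        void glClear(GLbitfield mask)
        {
            /* Resolve the original from the next object in the lookup chain. */
            static void (*real_glClear)(GLbitfield) = 0;
            if (!real_glClear)
                real_glClear = (void (*)(GLbitfield))dlsym(RTLD_NEXT, "glClear");

            /* ... post-processing hook goes here ... */

            real_glClear(mask);
        }

    Launched as LD_PRELOAD=./libnwshader.so ./nwmain, this replaces both the DLL-search trick and the /export redirection in one step; Detour-style hooking of arbitrary internal functions still needs extra work beyond this.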


  • How do I programmatically add an article page to a SharePoint site?

    - by soniiic
    I've been given the task of content migration from another CMS system to SharePoint 2010. The data in the old system is fairly easy to capture and the page hierarchy is simple, so I'm not worried about that. However, I am completely flummoxed about how to even create a page in code. I'm using the Microsoft.SharePoint.Client namespace, as I do not have SharePoint installed on my system and want to code this up as a console application, so I'm using ClientContext. (On the other hand, I am willing to go with other solutions if necessary.)

    My end-game: to get a page uploaded into some folder hierarchy which uses a master page, has the page title in a header web part, and a big ol' content-editable web part in the body so any user can come along and edit the content.

    Things I've tried so far: using FileCollection.Add() to add an .aspx file to the folder "Site Pages" (this renders the HTML in the browser but doesn't enable any features for the user to edit the page); using ListItemCollection.Add() to add a page to the site (but I didn't know what fields I needed, and I remember it came up with a runtime error saying I should use FileCollection.Add()); uploading to 'Site Pages' instead of 'Pages'; and so many others... ow my head :(

    The only plausible thing I can see on the net is to use the PublishingPage type along with PublishingWeb. However, PublishingWeb can only be constructed from an SPWeb object, which requires me to actually be hosting the SharePoint application on my workstation. If anyone can lend a hand that would be greatly appreciated :)
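    As a point of reference, a sketch of the first approach fleshed out with the client object model (URLs and names are placeholders; as noted in the question, this uploads a working page but does not by itself produce an editable publishing page):

        // C#, Microsoft.SharePoint.Client (SharePoint 2010 CSOM)
        using (var ctx = new ClientContext("http://server/sites/target"))
        {
            var pages = ctx.Web.Lists.GetByTitle("Site Pages");
            var fci = new FileCreationInformation
            {
                Url = "MyArticle.aspx",
                Content = System.Text.Encoding.UTF8.GetBytes(pageMarkup), // markup built beforehand
                Overwrite = true
            };
            var file = pages.RootFolder.Files.Add(fci);
            var item = file.ListItemAllFields;
            item["Title"] = "My Article";
            item.Update();
            ctx.ExecuteQuery();
        }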


  • What is the best database structure for this scenario?

    - by Ricketts
    I have a database that is holding real estate MLS (Multiple Listing Service) data. Currently, I have a single table that holds all the listing attributes (price, address, sqft, etc.). There are several different property types (residential, commercial, rental, income, land, etc.), and the property types share a majority of the attributes, but a few are unique to each type. My concern is that the shared attributes run to more than 250 fields, which seems like too many to have in a single table. My thought is I could break them out into an EAV (Entity-Attribute-Value) format, but I've read many bad things about that, and it would make running queries a real pain, as any of the 250 fields could be searched on. If I were to go that route, I'd literally have to pull all the data out of the EAV table, grouped by listing id, merge it on the application side, then run my query against the in-memory object collection. This also does not seem very efficient. I am looking for some ideas or recommendations on which way to proceed. Perhaps the 250+ field table is the only way to proceed. Just as a note, I'm using SQL Server 2012, .NET 4.5 with Entity Framework 5 and C#, and data is passed to the ASP.NET web application via a WCF service. Thanks in advance.
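    For reference, a minimal sketch of the EAV shape being described (names illustrative), which makes the querying pain concrete:

        CREATE TABLE ListingAttribute (
            ListingID   int           NOT NULL,  -- FK to the listing row
            AttributeID int           NOT NULL,  -- FK to an attribute-definition table
            Value       nvarchar(max) NULL,      -- everything stored as text
            PRIMARY KEY (ListingID, AttributeID)
        );

        -- A search such as "price between X and Y" now needs a join or pivot
        -- plus a CAST per attribute, which is why ad-hoc queries over 250
        -- possible fields against this layout get slow and ugly.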


  • How to best configure a central repository/multiple central repositories for Mercurial?

    - by Mario
    I am new to Mercurial and trying to figure out if it could replace SVN. Everyone I work with has used SVN, CVS and VSS (shiver), so this could be quite a large change. I have been very interested after reading about its merge and branch capability, but have a few reservations. We are currently on SVN, and have one central repository. From my reading, it seems as though there is no ONE central repository for all projects when using Mercurial. NOTE: we consider each project a separate logical set of code, or a Visual Studio solution; it runs on its own. We have around 60 separate projects in our one central SVN repository. After reading about Mercurial, it seems I would have to create 60 separate central repositories on the server, one for each of these projects.

    QUESTION #1: Should I create a single repository for each project? If yes, then I am worried about configuring and hosting 60 separate central Mercurial servers. I started thinking I could configure one file, but it seems as if each repository must be individually configured using the "C:\...\MyRepository\.hg\hgrc" file (Windows install). It also seems as if I have to run 60 servers (hg serve), I would assume on different ports.

    QUESTION #2: If the answer to question 1 is yes (one central repository per project), then how have people managed many repositories?

    Finally, I haven't looked into moving all history and changes from one SVN repository to a bunch of separate Mercurial repositories, but would appreciate any comments from someone who has done this (or if it is even possible).
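    On question #2: one hgweb front end (or a single hg serve process) can publish a whole directory of repositories, so 60 repos does not mean 60 servers. A sketch, with paths invented for illustration:

        # hgweb.config -- one web front end for many repositories
        [paths]
        # publish every repository found under this directory
        / = /srv/hg/projects/*

        [web]
        allow_push = *
        # only acceptable on a trusted internal network
        push_ssl = false

    Run it with "hg serve --webdir-conf hgweb.config", or host the hgweb CGI/WSGI script under Apache or IIS; each repository then lives at its own URL on one port.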


  • How would you go about parsing markdown?

    - by John Leidegren
    You can find the syntax here. The thing is, the source that comes with the download is written in Perl, which I have no intention of honoring. It is riddled with regexes and it relies on MD5 hashes to escape certain characters. Something is just wrong about that! I'm about to hand-code a parser for markdown and I wonder if someone has some experience with this?

    Edit: if you don't have anything meaningful to say about the actual parsing of markdown, spare me the time. (This might sound harsh, but yes, I'm looking for insight, not a solution, i.e. a third-party library.) To help a bit with the answers: regexes are meant to identify patterns, NOT to parse an entire grammar. That people consider doing so is foobar.

    If you think about markdown, it's fundamentally based around the concept of paragraphs. As such, a reasonable approach might be to split the input into paragraphs. There are many kinds of paragraphs, e.g. heading, text, list, blockquote, code. The challenge is thus to identify these paragraphs and in what context they occur. I'll be back with a solution, once I find it's worthy to be shared.
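    To make the paragraph-splitting idea concrete, a deliberately simplified sketch (real markdown rules, such as lazy continuation and setext headings, are much hairier):

        # Python: split source into blank-line-separated blocks, then classify each.
        import re

        def classify(block):
            if re.match(r'#{1,6} ', block):
                return 'heading'
            if all(line.startswith('>') for line in block.splitlines()):
                return 'blockquote'
            if re.match(r'([*+-]|\d+\.) ', block):
                return 'list'
            if all(line.startswith('    ') for line in block.splitlines()):
                return 'code'
            return 'text'

        def blocks(source):
            for block in re.split(r'\n\s*\n', source.strip()):
                yield classify(block), block

    Each (kind, text) pair can then be handed to a per-kind parser that understands inline spans, which keeps the regexes down at the pattern level where they belong.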


  • How to multi-thread this?

    - by WilliamKF
    I wish to have two threads. The first, thread1, occasionally calls the following pseudo function:

        void waitForThread2() {
            if (thread2 is not idle) {
                return;
            }
            notifyThread2IamReady();
            while (thread2IsExclusive) {
            }
        }

    The second, thread2, runs forever in the following pseudo loop:

        for (;;) {
            Notify thread1 I am idle.
            while (!thread1IsReady()) {
            }
            Notify thread1 I am exclusive.
            Do some work while thread1 is blocked.
            Notify thread1 I am busy.
            Do some work in parallel with thread1.
        }

    What is the best way to write this such that both thread1 and thread2 are kept as busy as possible on a machine with multiple cores? I would like to avoid long delays between notification in one thread and detection by the other. I tried using pthread condition variables, but found the delay between thread2 doing 'notify thread1 I am busy' and the loop in waitForThread2() seeing thread2IsExclusive change can be almost one second. I then tried using a volatile sig_atomic_t shared variable to control the same, but something is going wrong, so I must not be doing it correctly.
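    For comparison, a sketch of the standard condition-variable handshake (state names invented to mirror the pseudo code). Second-long stalls with condvars usually mean a notification was missed because the flag was changed without holding the mutex, not that condvars are slow:

        /* C / pthreads */
        #include <pthread.h>

        enum state { IDLE, READY, EXCLUSIVE, BUSY };
        static enum state state = IDLE;
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
        static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

        static void set_state(enum state s)
        {
            pthread_mutex_lock(&lock);
            state = s;                        /* change the flag under the mutex... */
            pthread_cond_broadcast(&cond);    /* ...then wake all waiters */
            pthread_mutex_unlock(&lock);
        }

        static void wait_for(enum state s)
        {
            pthread_mutex_lock(&lock);
            while (state != s)                /* loop guards against spurious wakeups */
                pthread_cond_wait(&cond, &lock);
            pthread_mutex_unlock(&lock);
        }

    thread2's loop becomes set_state(IDLE), wait_for(READY), set_state(EXCLUSIVE), work, set_state(BUSY), while waitForThread2() calls set_state(READY) and wait_for(BUSY) instead of spinning on flags.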


  • Adding a minimum display time for a Silverlight splash screen

    - by David
    When hosting a Silverlight application on a webpage it is possible to use the splashScreenSource parameter to specify a simple Silverlight 1.0 (XAML + JavaScript) control to be displayed while the real .xap file is downloaded, and which can receive notification of the download's progress through onSourceDownloadProgressChanged. If the .xap file is in cache, the splash screen is not shown (and if the download only takes 1 second, the splash screen will only be shown for 1 second). I know this is not best practice in general, but I am looking for a way to specify a minimum display time for the splash screen: even if the .xap is cached or the download is fast, the splash screen would remain up for at least, let's say, 5 seconds (for example, to show a required legal disclaimer, corporate identity mark or other bug). I do want to do it in the splash screen exclusively (rather than in the main .xap), as I want it to be clean and uninterrupted (for example, a sound bug) and shown to the user as soon as they open the page, rather than after the download (which could take anywhere from 1 to 20+ seconds). I'd prefer not to accomplish this with preloading, i.e. replacing the splash screen with a full Silverlight .xap application (with its own loading screen) which then programmatically loads and displays the full .xap after a minimum wait time.
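    One workaround, not using the built-in splash mechanism at all: keep an ordinary HTML overlay above the plugin and remove it only once both the download has completed and a minimum timer has elapsed. A sketch, with the element id and the 5-second floor invented for illustration:

        // JavaScript on the hosting page; assumes a <div id="splashOverlay">
        // positioned over the Silverlight object tag.
        var minimumUntil = new Date().getTime() + 5000;
        var downloadDone = false;

        function onPluginLoaded() {          // wired to the plugin's onLoad event
            downloadDone = true;
            tryHideOverlay();
        }

        function tryHideOverlay() {
            var remaining = minimumUntil - new Date().getTime();
            if (remaining > 0) {
                setTimeout(tryHideOverlay, remaining);
                return;
            }
            if (downloadDone) {
                document.getElementById("splashOverlay").style.display = "none";
            }
        }

    Since the overlay is plain HTML it appears instantly, survives the cached-.xap case, and can carry the disclaimer or identity bug without touching the main application.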


  • PHP import functions

    - by ninuhadida
    Hi, I'm trying to find the best pragmatic approach to import functions on the fly... let me explain. Say I have a directory called functions which has these files:

        array_select.func.php
        stat_median.func.php
        stat_mean.func.php
        .....

    I would like to load each individual file (which has a function defined inside) and use it just like an internal PHP function, such as array_pop(), array_shift(), etc. Once I stumbled on a tutorial (which I can't find again now) that compiled user-defined functions as part of a PHP installation. That's not a very good solution, though, because on shared/reseller hosting you can't recompile the PHP installation.

    I don't want to have conflicts with future versions of PHP or other extensions; i.e., if a function I named X suddenly becomes part of the internal PHP functions (even though it might not have the same functionality per se), I don't want PHP to throw a fatal error because of this and fail miserably. So the best method I can think of is to check if a function is defined using function_exists(); if so, throw a notice so that it's easy to track in the log files, otherwise define the function. However, that will probably translate to having a lot of include/require statements in other files where I need such a function, which I don't really like. Or possibly, read the directory and loop over each *.func.php file and include_once each one, though I find this a bit ugly.

    The question is: have you ever stumbled upon some source code which handled such a case? How was it implemented? Did you ever do something similar? I need as many ideas as possible! :)
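    The directory-loop variant is only a few lines; a sketch of it with the collision check folded in (the notice goes to the log instead of a fatal redeclare error):

        <?php
        // Load every functions/*.func.php file whose function name is still free.
        foreach (glob(__DIR__ . '/functions/*.func.php') as $file) {
            $name = basename($file, '.func.php');   // e.g. "array_select"
            if (function_exists($name)) {
                trigger_error("Skipping $file: $name() already defined", E_USER_NOTICE);
                continue;
            }
            include_once $file;
        }

    This assumes the convention that each file defines exactly one function named after the file, which the naming scheme above suggests.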


  • Grails Deployment - Fastest way to get deployed?

    - by gav
    Hi all, if anyone has or is running a Grails application on their server, I would appreciate some details on where to go after creating the WAR.

    Background: I chose Grails because with Google App Engine and the App Engine plugin, deployment should have been trivial. The issue is that there is a bug which makes any application pretty much unusable; I wish this had been more prominent so I didn't have to get to the point of seeing the error myself before I was aware of it. The next option was EC2 and the Cloud Tools plugin; it seems Cloud Tools worked with Grails 1.0 but doesn't work with the current 1.2.1 due to issues getting the JAR dependencies. It also seems that Cloud Tools has been succeeded by Cloud Foundry, which is in beta, will cost extra money and has limited places (I signed up but haven't got an e-mail).

    Question: my application is painfully trivial; it has a small load, small data requirements and doesn't need to scale past 5 users. How can I deploy my Grails app as quickly and painlessly as possible? Specifically: are there any hosting companies that have Tomcat installed on their servers out of the box that I can sign up with and that will just work? Do you know of any simple tutorials for getting a Grails application deployed to EC2 without Cloud Tools?

    Thanks in advance, Gav

    Side-note: I picked Grails because of good advice from SO; it should have been a very short time from development to deployed product, except the tools for auto-deployment aren't that mature and I've never configured a server before.
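    For scale like this, the low-tech route is hard to beat: build the WAR and drop it into any stock Tomcat. A sketch (host, paths and the war name are illustrative; the war's output location varies by Grails version):

        # build a production war, then copy it to the servlet container
        grails prod war
        scp myapp-0.1.war user@host:/var/lib/tomcat6/webapps/myapp.war
        # Tomcat auto-deploys the war; the app comes up at http://host:8080/myapp/

    Any VPS or shared host that ships Tomcat (or Jetty) will take a WAR this way, which sidesteps the App Engine and Cloud Tools problems entirely.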


  • Using the Search API with SharePoint Foundation 2010 - 0 results

    - by MB
    I am a SharePoint newbie and am having trouble getting any search results to return using the search API in SharePoint 2010 Foundation. Here are the steps I have taken so far. The service SharePoint Foundation Search v4 is running and logged in as Local Service. Under Team Site - Site Settings - Search and Offline Availability, Indexing Site Content is enabled. Running the PowerShell script Get-SPSearchServiceInstance returns:

        TypeName      : SharePoint Foundation Search
        Description   : Search index file on the search server
        Id            : 91e01ce1-016e-44e0-a938-035d37613b70
        Server        : SPServer Name=V-SP2010
        Service       : SPSearchService Name=SPSearch4
        IndexLocation : C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\14\Data\Applications
        ProxyType     : Default
        Status        : Online

    When I do a search using the search textbox on the team site I get results as I would expect. Now, when I try to duplicate the search using the search API, I either receive an error or 0 results. Here is some sample code:

        using Microsoft.SharePoint.Search.Query;

        using (var site = new SPSite(_sharepointUrl, token))
        {
            FullTextSqlQuery fullTextSqlQuery = new FullTextSqlQuery(site)
            {
                QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope() WHERE \"scope\"='All Sites' AND CONTAINS('\"{0}\"')", searchPhrase),
                //QueryText = String.Format("SELECT Title, SiteName, Path FROM Scope()", searchPhrase),
                TrimDuplicates = true,
                StartRow = 0,
                RowLimit = 200,
                ResultTypes = ResultType.RelevantResults
                //IgnoreAllNoiseQuery = false
            };

            ResultTableCollection resultTableCollection = fullTextSqlQuery.Execute();
            ResultTable result = resultTableCollection[ResultType.RelevantResults];
            DataTable tbl = new DataTable();
            tbl.Load(result, LoadOption.OverwriteChanges);
        }

    When the scope is set to All Sites I get an error about the search scope not being available; other searches just return 0 results. Any ideas about what I am doing wrong?


  • How to make images hosted on Amazon S3 less public but not completely private?

    - by Jay Godse
    I fired up a sample application that uses Amazon S3 for image hosting. I managed to coax it into working. The application is hosted at github.com. The application lets you create users with a profile photo. When you upload the photo, the web application stores it on Amazon S3 instead of your local file system (very important if you host at heroku.com). However, when I did a "view source" in the browser, I noticed that the URL of the picture was an Amazon S3 URL in the S3 bucket that I assigned to the app. I cut and pasted the URL and was able to view the picture in the same browser, and in another browser in which I had no open sessions to my web app or to Amazon S3. Is there any way that I could restrict access to that URL (and image) so that it is accessible only to browsers that are logged into my application? Most of the information I found about Amazon ACLs only talks about access for the owner, for groups of users authenticated with Amazon or Amazon S3, or for everybody anonymously.
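    The usual answer is to keep the objects private and have the application hand logged-in users short-lived signed URLs. A sketch of S3's classic query-string authentication (v2 signing; bucket, key and credentials are placeholders):

        # Python 2-style sketch
        import base64, hmac, time
        from hashlib import sha1
        from urllib import quote_plus

        def signed_url(bucket, key, access_key, secret_key, ttl=300):
            expires = int(time.time()) + ttl
            to_sign = "GET\n\n\n%d\n/%s/%s" % (expires, bucket, key)
            sig = base64.b64encode(hmac.new(secret_key, to_sign, sha1).digest())
            return ("https://%s.s3.amazonaws.com/%s?AWSAccessKeyId=%s&Expires=%d&Signature=%s"
                    % (bucket, key, access_key, expires, quote_plus(sig)))

    The <img> tags then point at URLs that expire after a few minutes, so a copied link stops working while the page keeps rendering for authenticated sessions. Most S3 client libraries wrap this up as a "presigned URL" helper.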


  • Linker, Libraries & Directories Information

    - by m00st
    I've finished both my C++ 1/2 classes and we did not cover anything on linking to libraries or adding additional libraries to C++ code. I've been having a hard time trying to figure this out; I've been unable to find basic information about linking to objects. Initially I thought the problem was the IDE (Netbeans, and Code::Blocks), but I've also been unable to get wxWidgets and gtkmm set up. Can someone point me in the right direction on the terminology and basic information about #including files and linking files in a C++ application? Basically I want/need to know everything in regards to this process: the difference between .dll, .lib, .o, .lib.a, .dll.a; the difference between a .h file and a "library" (.dll, .lib, correct?). I understand I need to read the documentation for the compiler I am using; however, all compilers (that I know of) use linkers and headers, so this information is general. Please point me in the right direction! :]

    So far on my quest I've found out: the linker links libraries that are already compiled into your project; .a files are static libraries (.lib in Windows); .dll in Windows is a shared library (.so in *nix). Thanks
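    A compressed illustration of how the pieces fit together with g++ (file names invented; the same shape applies to any toolchain):

        g++ -c mylib.cpp -o mylib.o        # compile: source + headers -> object file
        ar rcs libmylib.a mylib.o          # bundle object files into a static library
        g++ main.cpp -Iinclude -L. -lmylib -o app
        #            ^ header search path  ^ library search path and library name

    Headers (.h) only declare names so the compiler can type-check calls; the library file (.a/.lib static, .so/.dll shared) holds the compiled definitions that the linker pastes in or binds at load time. -lmylib expands to libmylib.a or libmylib.so found along the -L paths, which is exactly the wiring an IDE's "linker settings" dialog edits for you.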


  • Eclipse - Import existing multi-repository CVS project folder

    - by iQ
    Hey guys, wondering if anyone can help me out with Eclipse in terms of importing an existing CVS-managed project. I am currently trying to shift my work onto the Eclipse IDE. Some details about my project and environment: I'm working in Linux (Ubuntu); the project folder is located on a mounted shared network drive; and I have installed the Eclipse CVS client plug-in for my version of Eclipse (Helios). I've tried many ways to get Eclipse to use my existing folder as a project and recognize the CVS data in the CVS folders. I have tried the following: created a new project, selected existing source, located my project folder and clicked OK to finish creating (in the end the CVS files weren't automatically read); did the same as above, and after project creation went to "Project menu - Team - Share project", but it asks me to choose a repository and doesn't automatically find the CVS information in the subfolders.

    If you're wondering, I have set up both repositories in Eclipse and can browse them through the CVS browser. My project directory layout is like this:

        +-Project Folder (no CVS folder at this level)
        +---Repo A folder
        +-----CVS meta-info folder is INSIDE, along with all checked out files from Repo A
        +
        +---Repo B folder
        +-----CVS meta-info folder is INSIDE, along with all checked out files from Repo B
        +
        +-(couple of random files, not in CVS)

    Thanks for the help


  • Basic compile issue with QT4

    - by Cobus Kruger
    I've been trying to get a dead simple listing from a university textbook to compile with the newest Qt SDK for Windows, which I downloaded last night. After struggling through the regular nonsense (no make.bat, need to manually add environment variables and so on) I am finally at the point where I can build. But only one of the two libraries seems to work. The .pro file I use is dead simple:

        SUBDIRS += utils \
            dataobjects
        TEMPLATE = subdirs

    In each of these two subfolders I have the source for a library. Running qmake generates a makefile, and running make runs through all the preliminaries and then fails on the g++ call:

        g++ -enable-stdcall-fixup -Wl,-enable-auto-import -Wl,-enable-runtime-pseudo-reloc --out-implib,libdataobjects.a -shared -mthreads -Wl -Wl,--out-implib,c:\Users\Cobus\workspace\lib\libdataobjects.a -o ..\..\lib\dataobjects.dll object_script.dataobjects.Debug -L"c:\Users\Cobus\Portab~1\Qt\2010.02.1\qt\lib" -LC:\Users\Cobus\workspace\lib -lutils -lQtXmld4 -lQtGuid4 -lQtCored4
        c:/users/cobus/portab~1/qt/2010.02.1/mingw/bin/../lib/gcc/mingw32/4.4.0/../../../../mingw32/bin/ld.exe: cannot find -lutils

    The problem seems to be right near the end of the command line, where -lutils is added, indicating that there is a library by the name of utils. While I would have expected to see that, you'll notice the library names after --out-implib include lib in the name, so they become libutils and libdataobjects. I have tried to figure out why this is happening, to no avail. Anyone have an idea what's going on?
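    A guess at the missing piece, sketched against the layout implied by the error: with TEMPLATE = subdirs the sub-projects have no guaranteed build order, so dataobjects can reach its link step before libutils.a has been produced. Forcing the order (and pointing the linker at wherever utils actually lands) usually clears the "cannot find -lutils" error:

        # top-level .pro: build utils before dataobjects
        TEMPLATE = subdirs
        SUBDIRS += utils dataobjects
        CONFIG += ordered

        # dataobjects/dataobjects.pro: point the linker at utils' output
        # directory (adjust the relative path to match DESTDIR)
        LIBS += -L$$PWD/../lib -lutils

    If the order was already right, the next thing to verify is that the utils project's DESTDIR matches the directory named in the -L flag.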


  • Unable to render PHP files in browser

    - by p1
    Hello, I am very new to PHP, and I am trying to develop a Facebook application using PHP. I am using Joyent as my hosting platform. Currently, I am trying out some simple PHP scripts before building on them. However, I am unable to see any PHP files rendered properly in my application. For example, I have a simple script called phpinfo.php. If I execute it in the terminal with "php phpinfo.php", I can see all the configuration. However, if I try to access the same file as http://xxxxxx.facebook.joyent.us/phpinfo.php, I get:

        Internal Server Error
        The server encountered an internal error or misconfiguration and was unable to complete your request.

    Even if I rename this file to index.php it's the same. However, I am able to access other HTML files (index.html) in the same location. These are some of my PHP settings:

        [fbkusoni:~/web/public] aafhe7vh$ php phpinfo.php | grep On
        allow_url_fopen = On = On
        auto_globals_jit = On = On
        enable_dl = On = On
        file_uploads = On = On
        ignore_repeated_errors = On = On
        ignore_repeated_source = On = On
        implicit_flush = On = On
        log_errors = On = On
        register_argc_argv = On = On
        report_memleaks = On = On
        y2k_compliance = On = On
        Multibyte regex (oniguruma) backtrack check = On
        mysql.allow_persistent = On = On
        session.bug_compat_warn = On = On
        session.use_cookies = On = On
        suhosin.cookie.cryptdocroot = On = On
        suhosin.cookie.cryptua = On = On
        suhosin.mt_srand.ignore = On = On
        suhosin.protectkey = On = On
        suhosin.server.encode = On = On
        suhosin.server.strip = On = On
        suhosin.session.cryptdocroot = On = On
        suhosin.session.cryptua = On = On
        suhosin.session.encrypt = On = On
        suhosin.srand.ignore = On = On
        suhosin.stealth = On = On

    The answer might be very naive, but I am just trying to get started and am looking for any suggestions regarding this, and also about using Joyent and CakePHP to develop Facebook applications. Thanks.


  • .NET Graphics.ScaleTransform converts print job to bitmap. Any other way to scale text?

    - by Philip Dunaway
    I'm using Graphics.ScaleTransform to stretch lines of text so they fit the width of the page, and then printing that page. However, this converts the print job to a bitmap - for a print with many pages this causes the size of the print job to rise to obscene proportions, and slows down printing immensely. If I don't scale like this, the print job remains very small, as it is just sending text print commands to the printer. My question is, is there any way other than using Graphics.ScaleTransform to stretch the width of the text? Sample code to demonstrate this is below (would be called with Print.Test(True) and Print.Test(False) to show the effects of scaling on the print job):

        Imports System.Drawing
        Imports System.Drawing.Printing
        Imports System.Drawing.Imaging

        Public Class Print
            Dim FixedFont As Font
            Dim Area As RectangleF
            Dim CharHeight As Double
            Dim CharWidth As Double
            Dim Scale As Boolean

            Const CharsAcross = 80
            Const CharsDown = 66
            Const TestString = "!""#$%&'()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_`abcdefghijklmnopqrstuvwxyz{|}~"

            Private Sub PagePrinter(ByVal sender As Object, ByVal e As PrintPageEventArgs)
                Dim G As Graphics = e.Graphics
                If Scale Then
                    Dim ws = Area.Width / G.MeasureString(Space(CharsAcross).Replace(" ", "X"), FixedFont).Width
                    G.ScaleTransform(ws, 1)
                End If
                For CurrentLine = 1 To CharsDown
                    G.DrawString(Mid(TestString & TestString & TestString, CurrentLine, CharsAcross), FixedFont, Brushes.Black, 0, Convert.ToSingle(CharHeight * (CurrentLine - 1)))
                Next
                e.HasMorePages = False
            End Sub

            Public Shared Sub Test(ByVal Scale As Boolean)
                Dim OutputDocument As New PrintDocument
                With OutputDocument
                    Dim DP As New Print
                    .PrintController = New StandardPrintController
                    .DefaultPageSettings.Landscape = False
                    DP.Area = .DefaultPageSettings.PrintableArea
                    DP.CharHeight = DP.Area.Height / CharsDown
                    DP.CharWidth = DP.Area.Width / CharsAcross
                    DP.Scale = Scale
                    DP.FixedFont = New Font("Courier New", DP.CharHeight / 100, FontStyle.Regular, GraphicsUnit.Inch)
                    .DocumentName = "Test print (with" & IIf(Scale, "", "out") & " scaling)"
                    AddHandler .PrintPage, AddressOf DP.PagePrinter
                    .Print()
                End With
            End Sub
        End Class
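    One alternative worth sketching, hedged since it is not a true width-only stretch: keep the job as pure text by choosing a font size that makes the longest line fit the printable width. Unlike ScaleTransform this scales height along with width, so it only suits layouts that can trade line count for fidelity:

        ' VB.NET sketch; measures once and rescales the em size proportionally.
        Private Function FitFont(ByVal g As Graphics, ByVal sample As String, _
                                 ByVal targetWidth As Single) As Font
            Dim trial As New Font("Courier New", 12.0F)
            Dim w As Single = g.MeasureString(sample, trial).Width
            ' Font metrics are roughly linear in em size, so one step suffices.
            Dim fitted As New Font("Courier New", 12.0F * targetWidth / w)
            trial.Dispose()
            Return fitted
        End Function

    A genuine horizontal-only stretch of GDI+ text without rasterizing the page appears to need either a condensed font variant or the transform the question is trying to avoid.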


  • 2 different routes on one page?

    - by Dejan.S
    Hi, I'm pretty new to MVC2, or MVC in general. If there's one thing I get caught up with, it's routes. Like now, I have this scenario: I'm going from the regular site to Admin. My navigation is the same partial view on both; I just check which data to render, something like this:

        <% if (!Request.RawUrl.Contains("Admin")){%>
            <% foreach (var site in Model) { %>
                <%= Html.MenuItem(site.BelongSite, "Sida", "Site", site.BelongSite) %> |
            <%} %>
        <%} else {%>
            <%= Html.ActionLink("Konfig", "Konfigurera", "Admin") %>
        <% } %>

    My route looks like this:

        routes.MapRoute(
            "Admin", // Route name
            "Admin/{action}/{name}", // URL with parameters
            new { controller = "Admin", action = "konfigurera", name = UrlParameter.Optional } // Parameter defaults
        );

    On my view called Konfigurera I have Edit sites; they use the route above and it works great. The navigation link, though, doesn't get an action assigned to it: the rendered markup is just <a href='Admin/'>. The navigation is in the Shared folder, and it is strongly typed. Any ideas? I've been struggling with this for about an hour now. Thanks for any input.
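    A likely explanation, hedged since the full route table isn't shown: when a generated URL's values match a route's defaults, MVC omits them, so ActionLink("Konfig", "Konfigurera", "Admin") against the route above legitimately renders as href='Admin/' - and that URL still dispatches to AdminController.Konfigurera. Such links usually only break when registration order hides the specific route behind the catch-all:

        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

            // Specific routes must be registered before the default route.
            routes.MapRoute(
                "Admin",
                "Admin/{action}/{name}",
                new { controller = "Admin", action = "Konfigurera", name = UrlParameter.Optional });

            routes.MapRoute(
                "Default",
                "{controller}/{action}/{id}",
                new { controller = "Home", action = "Index", id = UrlParameter.Optional });
        }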


  • ASP.NET error on Bitmap.Save "Exception (0x80004005): A generic error occurred in GDI+."

    - by Batu
    Hi, I have a function which first reads an image from disk, resizes it, and then saves it to another directory. When I use Bitmap.Save(directory + imageName) it returns the error stated in the question title. I checked that the directory is right and that the given image name doesn't exist in that directory. What is weird is that the same code works great on my local machine, but when I upload it to my shared server it just doesn't work. The code is below:

        bmpOut = new Bitmap(Size, Size);
        Graphics g = Graphics.FromImage(bmpOut);
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.FillRectangle(Brushes.White, 0, 0, Size, Size);
        int topBottomPadding = 0;
        int leftRightPadding = 0;
        if (Size > lnNewWidth + 1)
            leftRightPadding = Convert.ToInt32((Size - lnNewWidth) / 2);
        else if (Size > lnNewHeight + 1)
            topBottomPadding = Convert.ToInt32((Size - lnNewHeight) / 2);
        g.DrawImage(loBMP, leftRightPadding, topBottomPadding, lnNewWidth, lnNewHeight);
        Bitmap bmp = new Bitmap(bmpOut);
        if (bmp != null)
            bmp.Save(ResizedOutput);
        bmp.Dispose();
        bmpOut.Dispose();
        g.Dispose();
        loBMP.Dispose();

    Stack trace:

        [ExternalException (0x80004005): A generic error occurred in GDI+.]
        System.Drawing.Image.Save(String filename, ImageCodecInfo encoder, EncoderParameters encoderParams) +377630
        System.Drawing.Image.Save(String filename, ImageFormat format) +69
        System.Drawing.Image.Save(String filename) +25
        Utilities.ResizeImage(String fileName, String mode) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Utilities.cs:181
        Link.ToProductImage(String fileName) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\App_Code\Link.cs:79
        Product.PopulateControls(ProductDetails pd) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:37
        Product.Page_Load(Object sender, EventArgs e) in c:\inetpub\vhosts\batuhanakcay.com\httpdocs\Product.aspx.cs:20
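    "Works locally, fails on the host" with this exception is most often a file-system problem - the application pool identity lacking write permission on the target directory, or the path not existing - rather than an imaging one. A sketch that separates the two failure modes (names are placeholders; requires System.IO and System.Drawing.Imaging):

        using (var ms = new MemoryStream())
        {
            bmpOut.Save(ms, ImageFormat.Jpeg);   // throws here -> genuine GDI+/encoder issue
            Directory.CreateDirectory(Path.GetDirectoryName(resizedOutput));
            File.WriteAllBytes(resizedOutput, ms.ToArray()); // throws here -> path/permissions
        }

    If the second call is the one failing, granting the IIS application pool user write access to the output folder through the hosting control panel typically resolves it.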


  • Authentication mode "None" for one folder (page) when the project uses Forms authentication

    - by Sirius Lampochkin
    I have an ASP.NET 2.0 web application with namespace Admin. I use Forms authentication for the project:

        <authentication mode="Forms">
            <forms name="ASP_XML_Form" loginUrl="Login.aspx" protection="All" timeout="30" path="/"
                requireSSL="false" slidingExpiration="true" cookieless="AutoDetect">
            </forms>
        </authentication>

    Now I am trying to open up one folder (one inner page) to unauthenticated users:

        <location path="Recovery">
            <system.web>
                <roleManager enabled="false" >
                </roleManager>
                <authentication mode="None">
                </authentication>
                <authorization>
                    <allow users="*" />
                </authorization>
                <httpHandlers>
                    <remove verb="GET" path="image.aspx" />
                    <remove verb="GET" path="css.aspx" />
                </httpHandlers>
            </system.web>
        </location>

    But when I create the page inside the shared folder, it can't get access to the assembly, and I see an error like this:

        Could not load file or assembly 'Admin' or one of its dependencies. The system cannot find the file specified.

    It also shows me the error:

        ASP.NET runtime error: It is an error to use a section registered as allowDefinition='MachineToApplication' beyond application level. This error can be caused by a virtual directory not being configured as an application in IIS.

    Does anybody know how to open up (authentication: none) one folder (page) when the project uses Forms authentication?
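    For what it's worth, the allowDefinition='MachineToApplication' message is pointing at the <authentication> element itself: it may only be set at application level, never inside a <location> scoped to a folder. Anonymous access to one folder needs only an <authorization> override, since Forms authentication already lets anonymous users through until something denies them. A sketch:

        <!-- "*" = all users, including anonymous; no <authentication> override needed -->
        <location path="Recovery">
          <system.web>
            <authorization>
              <allow users="*" />
            </authorization>
          </system.web>
        </location>

    <roleManager> carries the same application-level restriction, so if errors persist, trim the location block down to <authorization> alone.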


  • How to do cleanup reliably in Python?

    - by Cheery
    I have some ctypes bindings, and for each body.New I should call body.Free. The library I'm binding doesn't have its allocation routines insulated from the rest of the code (they can be called from about anywhere), and to use a couple of useful features I need to make cyclic references. I think it would be solved if I could find a reliable way to hook a destructor to an object. (Weakrefs would help if they'd give me the callback just before the data is dropped.) So obviously this code megafails when I put in velocity_func:

        class Body(object):
            def __init__(self, mass, inertia):
                self._body = body.New(mass, inertia)

            def __del__(self):
                print '__del__ %r' % self
                if body:
                    body.Free(self._body)
            ...
            def set_velocity_func(self, func):
                self._body.contents.velocity_func = ctypes_wrapping(func)

    I also tried to solve it through weakrefs; with those, things seem to get just worse and largely more unpredictable. Even if I don't put in the velocity_func, cycles appear, at least when I do this:

        class Toy(object):
            def __init__(self, body):
                self.body.owner = self
        ...
        def collision(a, b, contacts):
            whatever(a.body.owner)

    So how do I make sure the Structures get garbage collected, even if they are allocated/freed by the shared library? There's a repository if you are interested in more details: http://bitbucket.org/cheery/ctypes-chipmunk/
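    A pattern that may help, sketched for CPython 2.x: free the C struct from a weakref callback instead of __del__. The callback closes over only the raw pointer, so it neither keeps the Body alive nor blocks cycle collection the way __del__ does (objects with __del__ that sit in a cycle end up in gc.garbage instead of being freed):

        import weakref

        _live_refs = set()   # keeps the weakref objects themselves alive

        class Body(object):
            def __init__(self, mass, inertia):
                ptr = body.New(mass, inertia)
                self._body = ptr
                def cleanup(ref, ptr=ptr):
                    _live_refs.discard(ref)
                    body.Free(ptr)
                _live_refs.add(weakref.ref(self, cleanup))

    The callback fires as the Body is being reclaimed, including when the cycle collector tears down a Toy/Body loop, which is essentially the "just before the data is dropped" hook the question asks for.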


  • Web CMS That Outputs to Flat Static Pages (.html) via FTP to Remote Server?

    - by Sootah
    I have a web app project that I will be starting work on shortly. One of the features is a content management system where users can add content; that content is then combined with a template and output as a regular .html file, which is FTPed to their own web host. As I've always believed in not reinventing the wheel, I figured I'd see if there are any quality customizable CMSes out there that already do this. For instance, Blogger.com lets you post all of your content to your account there, but offers the option of using your own hosting: any time you publish a new article, a new .html page is generated (as well as an updated index page with links to the new article), and the updated content is FTPed to your own server. What I would like is something like this that I can modify to more closely suit my needs.

    Required features: able to host on my own server; written in PHP; users add content through their account, and when posted it is FTPed as .html to their server; any appropriate pages are also updated to link to the new content (like the index page or whatnot); templateable; customizable.

    Optional (but very much desired) features: written in CodeIgniter or a similar PHP framework. While CodeIgniter isn't strictly required, I would very much prefer it; it speeds up development time and makes things much easier to implement.

    So - any suggestions? I've stumbled across a few CMSes that push to remote servers as static pages, but the ones I've found are all hosted on the developers' servers, which means that I cannot modify them at all. Thanks again, fellow StackOverflowians! -Sootah
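    Should it come to rolling the publish step by hand, the core of it is small in plain PHP; a sketch with all names invented for illustration:

        <?php
        // Merge content into a template, write flat HTML, push it over FTP.
        $html = str_replace(
            array('{title}', '{body}'),
            array($article['title'], $article['body']),
            file_get_contents('templates/article.tpl.html')
        );
        $local = tempnam(sys_get_temp_dir(), 'pub');
        file_put_contents($local, $html);

        $ftp = ftp_connect($user['ftp_host']);
        ftp_login($ftp, $user['ftp_user'], $user['ftp_pass']);
        ftp_put($ftp, 'articles/' . $article['slug'] . '.html', $local, FTP_ASCII);
        ftp_close($ftp);

    Regenerating and re-uploading the index page after each publish follows the same shape, which is essentially what the Blogger-style workflow described above does.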


  • LGPL library with plugins of varied licenses

    - by Chris
    Note: "Plugins" here refers to shared objects that are accessed via dlopen() and friends. I'm writing a library that I'm planning on releasing under the LGPL. Its functionality can be extended (supporting new audio file formats, specifically) through plugins. I'm planning on creating an exception to the LGPL for this library so that plugins can be released under any license. So far so good. I've written a number of plugins already, some of which use LGPL and some of which use GPL libraries. I'm wary of releasing them with the main library, however, due to licensing issues. The LGPL-based ones would generally be fine, but for my "any license" clause. Would distributing these LGPL-based plugins with the library require the consent of the other license holders to create this exception? Along the same lines, would the inclusion of GPL-based plugins with my library force the whole thing to go GPL? I could also release the plugins separately. The advantage, I presume, is that the plugins an d library will now not be distributed together, creating more separation. But this seems to be no different, really, in the end. Boiled down: Can I include, with my LGPL library, plugins of varied licenses? If not, is it really any different releasing them separately? And if so, there's no real need to create an exception for non-LGPL plugins, is there? It's LGPL or nothing. I'd prefer asking a lawyer, of course, but this is just a hobby and I can't afford to hire a lawyer when I don't expect or want monetary compensation. I'm just hoping others have been in similar situations and have insight.


  • Single developer, project organization

    - by poke
    I am looking for a good (and free) way to organize some of my personal projects. I say "organize" because I'm not sure the standard project management software solutions are exactly what I, as a single developer, need. In general, I just want to keep the progress of my projects organized in some way. I would like to be able to keep track of milestones and split them into multiple smaller tasks, so I can keep track of my progress. So some task/issue-based system would probably be good, especially as I also want to keep track of issues/bugs with specific versions (although I alone will create those issues). I am and will be the only developer on these projects, so it doesn't matter if the software is offline or online, and I don't need any collaboration features (like commenting on things, or assigning tasks to other developers, etc.). But if a good fit has those features anyway, I don't really care; it's easy enough not to use available features. Many online solutions also offer integrated code hosting. I am using git internally, but I don't plan to push any of the code, so such a feature is not needed either. In the case of online solutions, however, I would like the projects to be closed to the public (some of the online utilities only offer open source projects for free and require payment for private projects).

    I have looked at some project management solutions already, and I also read some similar questions here on SO. But given that I'm a single developer, my focus is a bit different from the usual request for a huge distributed system that supports many developers and collaboration features. Some standard answers such as Trac (which also only supports one project), Redmine and FogBugz look interesting, but are a bit off my interest (although you may change my mind on that :P). Currently, I'm looking at Indefero, which doesn't look too bad. But what do you think?


  • How do I track down sporadic ASP.NET performance problems in a production environment?

    - by Steve Wortham
    I've had sporadic performance problems with my website for a while now. 90% of the time the site is very fast, but occasionally it is just really, really slow - 5 to 10 second load times. I thought I had narrowed it down to the server I was on, so I migrated everything to a new dedicated server from a completely different web hosting company. But the problems continue. I guess what I'm looking for is a good tool that'll help me track down the problem, because it's clearly not the hardware. I'd like to be able to log certain events in my ASP.NET code and have that same logger also track server performance/resources at the time. If I can then look back at the logs, I can see exactly what my website was doing at the time of extreme slowness. Is there a .NET logging system that'll allow me to make calls into it with code while simultaneously tracking performance? What would you recommend?
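    Absent a dedicated profiler, a home-grown version of this is only a few lines. A sketch - the Log call stands in for log4net/NLog or whatever logger is already in place, and it needs System.Diagnostics:

        var sw = Stopwatch.StartNew();
        // ... run the page/request logic ...
        sw.Stop();

        if (sw.ElapsedMilliseconds > 2000)   // only record the slow outliers
        {
            using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
            using (var mem = new PerformanceCounter("Memory", "Available MBytes"))
            {
                cpu.NextValue();                     // first sample of this counter is always 0
                System.Threading.Thread.Sleep(100);  // short interval for a meaningful reading
                Log(string.Format("Slow request {0}: {1} ms, CPU {2:F0}%, {3:F0} MB free",
                    HttpContext.Current.Request.RawUrl, sw.ElapsedMilliseconds,
                    cpu.NextValue(), mem.NextValue()));
            }
        }

    Slow requests then line up in the log against whatever the machine was doing at that moment, which usually separates "my code is slow" from "the box was starved" at a glance.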


  • 3 tier application pattern suggestion

    - by Maxim Gershkovich
    I have attempted to make my first 3-tier application, and in the process I have run into one problem I am yet to find an optimal solution for. All my objects use an IFillable interface which forces the implementation of a sub as follows:

        Public Sub Fill(ByVal Datareader As Data.IDataReader) Implements IFillable.Fill

    This sub then expects the IDs from the datareader to be identical to the properties of the object, as such:

        Me.m_StockID = Datareader.GetGuid(Datareader.GetOrdinal("StockID"))

    In the end I end up with a data layer that looks something like this:

        Public Shared Function GetStockByID(ByVal ConnectionString As String, ByVal StockID As Guid) As Stock
            Dim res As New Stock
            Using sqlConn As New SqlConnection(ConnectionString)
                sqlConn.Open()
                res.Fill(StockDataLayer.GetStockByIDQuery(sqlConn, StockID))
            End Using
            Return res
        End Function

    Mostly this pattern seems to make sense. However, my problem is this: let's say I want to implement a property for Stock called StockBarcodeList. Under the above-mentioned pattern, any way I implement this property will require passing a connection string to it, which obviously breaks my attempt at layer separation. Does anyone have any suggestions on how I might solve this problem, or am I going about this the completely wrong way? Does anyone have any suggestions on how I might improve my implementation? Please note, however, that I am deliberately trying to avoid using the DataSet in any form.
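    One direction to consider, sketched in the question's own style (GetBarcodesByStockID is hypothetical): keep the entities free of connection strings and let a small service/business layer do the orchestration, so child collections are loaded by the data layer and merely assigned to the object:

        Public Class StockService
            Private ReadOnly m_ConnectionString As String

            Public Sub New(ByVal connectionString As String)
                m_ConnectionString = connectionString
            End Sub

            Public Function GetStockWithBarcodes(ByVal stockID As Guid) As Stock
                Dim res As Stock = StockDataLayer.GetStockByID(m_ConnectionString, stockID)
                ' The entity only holds the list; it never opens a connection itself.
                res.StockBarcodeList = StockDataLayer.GetBarcodesByStockID(m_ConnectionString, stockID)
                Return res
            End Function
        End Class

    Callers then depend on StockService rather than on the data layer directly, which keeps the Stock object a plain data holder.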

