Search Results

Search found 24931 results on 998 pages for 'information visualization'.


  • ASP.NET Login Status Question: It Ain't Working

    - by contactmatt
    I'm starting to use role management in my website, and I'm currently following along with the tutorial at http://www.asp.net/Learn/Security/tutorial-02-vb.aspx . I'm having a problem with the asp:LoginStatus control: it is not telling me that I am logged in after a successful login. That can't be right, because after successfully logging in, my LoggedInTemplate is shown. The usernames and passwords are simply stored in an array. Here's the Login.aspx code-behind:

        Protected Sub btnLogin_Click(ByVal sender As Object, ByVal e As System.EventArgs) _
                Handles btnLogin.Click
            ' Three valid username/password pairs: Scott/password, Jisun/password, and Sam/password.
            Dim users() As String = {"Scott", "Jisun", "Sam"}
            Dim passwords() As String = {"password", "password", "password"}

            For i As Integer = 0 To users.Length - 1
                Dim validUsername As Boolean = (String.Compare(txtUserName.Text, users(i), True) = 0)
                Dim validPassword As Boolean = (String.Compare(txtPassword.Text, passwords(i), False) = 0)
                If validUsername AndAlso validPassword Then
                    FormsAuthentication.RedirectFromLoginPage(txtUserName.Text, chkRemember.Checked)
                End If
            Next

            ' If we reach here, the user's credentials were invalid
            lblInvalid.Visible = True
        End Sub

    Here is the content placeholder on the master page specifically designed to hold login information. On successful login, the page is redirected to /Default.aspx and the LoggedInTemplate below is shown... but the status still says "Log In".

        <asp:ContentPlaceHolder ID="LoginContent" runat="server">
            <asp:LoginView ID="LoginView1" runat="server">
                <LoggedInTemplate>
                    Welcome back, <asp:LoginName ID="LoginName1" runat="server" />.
                </LoggedInTemplate>
                <AnonymousTemplate>
                    Hello, stranger.
                </AnonymousTemplate>
            </asp:LoginView>
            <br />
            <asp:LoginStatus ID="LoginStatus1" runat="server" LogoutAction="Redirect"
                LogoutPageUrl="~/Logout.aspx" />
        </asp:ContentPlaceHolder>

    Forms authentication is enabled. I'm not sure what to do about this.
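
    A setup like this hinges on the web.config authentication section; as a reference point, a minimal sketch of what this style of tutorial assumes (the paths below are placeholders, not values taken from the question):

        <!-- Assumed web.config fragment: mode must be Forms for the ticket to be issued -->
        <system.web>
            <authentication mode="Forms">
                <forms loginUrl="~/Login.aspx" defaultUrl="~/Default.aspx" />
            </authentication>
        </system.web>

    LoginStatus chooses between "Log In" and "Log Out" purely from Request.IsAuthenticated, so if the forms ticket never reaches the browser (a non-Forms mode, or a cookie domain/path mismatch), the control keeps showing "Log In" even while other parts of the page look logged in.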

  • Custom Database integration with MOSS 2007

    - by Bob
    Hopefully someone has been down this road before and can offer some sound advice about which direction I should take. I am currently involved in a project in which we will be using a custom database to store data extracted from Excel files based on pre-established templates (to maintain consistency). We currently have a process (written in C#.NET 2008) that can extract the necessary data from the spreadsheets and import it into our custom database.

    What I am primarily interested in is the best method for integrating that process with our portal. I would like SharePoint to keep track of the metadata about the spreadsheet itself, and let the custom database keep track of the data contained within the spreadsheet. So, one thing I need is a way to link spreadsheets in SharePoint to the custom database and vice versa. As these spreadsheets will be updated periodically, I need a tried and true way of ensuring that the data remains synchronized between SharePoint and the custom database. I am also interested in finding out how to use the data from the custom database to create reports within the SharePoint portal. Any and all information will be greatly appreciated.
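
    One common shape for the link, sketched here under the assumption that the spreadsheets live in a document library with a custom column for the database key (the library, column, and variable names below are all hypothetical):

        // Hypothetical sketch (MOSS 2007 object model): stamp the custom
        // database's row ID onto the list item that holds the spreadsheet,
        // so either side can find the other.
        using (SPSite site = new SPSite("http://portal"))
        using (SPWeb web = site.OpenWeb())
        {
            SPList library = web.Lists["Spreadsheets"];      // assumed library name
            SPListItem item = library.GetItemById(itemId);   // the uploaded spreadsheet
            item["CustomDbId"] = databaseRowId;              // assumed custom column
            item.SystemUpdate();                             // update without creating a new version
        }

    Storing the list item's unique ID in the database row at the same time gives the import process a stable two-way join to synchronize against.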

  • Getting Visual Studio macros in console app

    - by Paul Steckler
    In a Visual Studio extension, you can get the default include paths for all projects with C# code like:

        String dirs = dte2.get_Properties("Projects", "VCDirectories");

    where dte2 is the Visual Studio application object. Usually those directories contain macros like $(INCLUDE). You can expand those macros by looking at dte2.Solution.Projects and finding the relevant project in that collection; from the project, look at project.Configurations, find the relevant configuration, and call its Evaluate method.

    In VS2005/VS2008, there's a .vssettings file that contains the VCDirectories. In VS2010, there's a property sheet with the same information. A console application can just parse those files -- great. But how can you expand the macros? As a first step, I tried instantiating a VCProjectEngine object in a console app, but that just resulted in a COM failure. So I don't know how to instantiate a VCProject object in order to follow the same strategy I used in a VS extension. Where are the macro bindings stored?
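
    Many of the macros that show up in VCDirectories ($(INCLUDE), $(PATH), and so on) mirror environment variables, so a console app can at least expand those by hand. A minimal sketch, with the caveat that project-scoped macros such as $(ProjectDir) have no environment backing and would need values supplied separately:

        using System;
        using System.Text.RegularExpressions;

        static class MacroExpander
        {
            // Hedged sketch: expand $(NAME) tokens from the environment,
            // leaving unknown macros untouched for a later pass.
            public static string Expand(string path)
            {
                return Regex.Replace(path, @"\$\(([^)]+)\)", m =>
                {
                    string value = Environment.GetEnvironmentVariable(m.Groups[1].Value);
                    return value ?? m.Value;   // unknown macro: keep as-is
                });
            }
        }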

  • Java List with Objects - find and replace (delete) entry if Object with certain attribute already exists

    - by Sophomore
    Hi there. I've been working all day and I somehow can't get this probably easy task figured out - probably a lack of coffee... I have a synchronizedList where some objects are being stored. Those objects have a field which is something like an ID. These objects carry information about a user and his current state (simplified). The point is that I only want one object for each user. So when the state of this user changes, I'd like to remove the "old" entry and store a new one in the list.

        protected static class Objects {
            ...
            long time;
            Object ID;
            ...
        }

        ...
        if (Objects.contains(ID)) {
            Objects.remove(ID);
            Objects.add(newObject);
        } else {
            Objects.add(newObject);
        }

    Obviously this is not the way to go, but it should illustrate what I'm looking for... Maybe the data structure is not the best for this purpose, but any help is welcome!
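
    Since the post leaves the data structure open, a minimal sketch of the usual alternative: key the collection by the ID itself, so replace-if-present collapses into a single put (the state class below is a made-up stand-in for the poster's object):

        import java.util.Map;
        import java.util.concurrent.ConcurrentHashMap;

        class UserStateRegistry {
            // Placeholder for the poster's user-state object
            static class UserState {
                final Object id;
                final long time;
                UserState(Object id, long time) { this.id = id; this.time = time; }
            }

            // One entry per user ID; put() replaces any previous entry atomically,
            // which folds the contains/remove/add dance into one call.
            private final Map<Object, UserState> statesById =
                    new ConcurrentHashMap<Object, UserState>();

            void update(UserState newState) {
                statesById.put(newState.id, newState);
            }
        }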

  • multiple keys and values with google-collections

    - by flash3000
    Hello, I would like to use google-collections to store the following file in a hash with multiple keys and values:

        Key1_1, Key2_1, Key3_1, data1_1, 0, 0
        Key1_2, Key2_2, Key3_2, data1_2, 0, 0
        Key1_3, Key2_3, Key3_3, data1_3, 0, 0
        Key1_4, Key2_4, Key3_4, data1_4, 0, 0

    The first three columns are the different keys, and the last two integers are the two different values. I have already prepared code which splits the lines into chunks:

        import java.io.BufferedReader;
        import java.io.FileNotFoundException;
        import java.io.FileReader;
        import java.io.IOException;

        public class HashMapKey {
            public static void main(String[] args) throws FileNotFoundException, IOException {
                String inputFile = "inputData.txt";
                BufferedReader br = new BufferedReader(new FileReader(inputFile));
                String strLine;
                while ((strLine = br.readLine()) != null) {
                    String[] line = strLine.replaceAll(" ", "").trim().split(",");
                    for (int i = 0; i < line.length; i++) {
                        System.out.print("[" + line[i] + "]");
                    }
                    System.out.println();
                }
            }
        }

    Unfortunately, I do not know how to store this information with google-collections. Thank you in advance. Best regards,
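
    A minimal sketch of one way to hold rows like these in google-collections, assuming the three key columns can be folded into one composite key and that only the two trailing integers need to be kept as values (all names below are invented):

        import com.google.common.collect.ArrayListMultimap;
        import com.google.common.collect.Multimap;

        public class MultiKeyStore {
            public static void main(String[] args) {
                // Multimap from a composite key (the three key columns joined
                // with '|') to the two integer values; ArrayListMultimap keeps
                // every row that happens to share a key.
                Multimap<String, int[]> rows = ArrayListMultimap.create();

                String[] line = { "Key1_1", "Key2_1", "Key3_1", "data1_1", "0", "0" };
                String key = line[0] + "|" + line[1] + "|" + line[2];
                rows.put(key, new int[] { Integer.parseInt(line[4]), Integer.parseInt(line[5]) });
            }
        }

    If the fourth column matters too, a small value class holding all three trailing fields is the natural extension.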

  • How to compile Python scripts for use in FORTRAN?

    - by Vincent Poirier
    Hello. Although I found many answers and discussions about this question, I am unable to find a solution particular to my situation. Here it is: I have a main program written in FORTRAN, and I have been given a set of Python scripts that are very useful. My goal is to access these Python scripts from my main FORTRAN program. Currently, I simply call the scripts from FORTRAN like so:

        CALL SYSTEM ('python pyexample.py')

    Data is read from .dat files and written to .dat files; this is how the Python scripts and the main FORTRAN program communicate with each other. I am currently running my code on my local machine, where I have Python installed with numpy, scipy, etc.

    My problem: the code needs to run on a remote server. For strictly FORTRAN code, I compile locally and send the executable to the server, where it waits in a queue. However, the server does not have Python installed. The server is a number-crunching station shared between universities and industry, and installing Python along with the necessary modules on it is not an option. This means my CALL SYSTEM ('python pyexample.py') strategy no longer works.

    Solution? I found some information on a couple of things in the thread http://stackoverflow.com/questions/138521/is-it-feasible-to-compile-python-to-machine-code : Shedskin, Psyco, Cython, Pypy, and the CPython API. These tools (I'm not sure that's the right word for all of them) seem to compile Python scripts to C or C++ code, though apparently not all Python features can be translated to C, and some of the tools appear to be experimental. Is it possible to compile my Python scripts together with my FORTRAN code? There exists f2py, which makes FORTRAN code callable from Python, but it doesn't work the other way around. Any help would be greatly appreciated. Thank you for your time. Vincent

    PS: I'm using Python 2.6 on Ubuntu.
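
    One route worth weighing alongside those compilers: freeze the scripts into a standalone executable locally, so the server needs no Python at all. A minimal sketch using cx_Freeze (whether it bundles the numpy/scipy pieces cleanly has to be verified case by case):

        # setup.py -- build locally with:  python setup.py build
        # The build/ directory then contains a pyexample binary plus a bundled
        # interpreter and libraries, shippable just like the FORTRAN executable.
        from cx_Freeze import setup, Executable

        setup(
            name="pyexample",
            version="0.1",
            description="Helper script called from the FORTRAN driver",
            executables=[Executable("pyexample.py")],
        )

    The FORTRAN side would then shell out to the frozen binary instead: CALL SYSTEM ('./pyexample').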

  • BackgroundWorker From ASP.Net Application

    - by Kevin
    We have an ASP.NET application that lets administrators work with and perform operations on large sets of records. For example, we have a "Polish Data" task that an administrator can run to clean up the data for a record (e.g. reformat phone numbers, social security numbers, etc.). When performed on a small number of records, the task completes relatively quickly; on a larger set it may take several minutes or longer. So we want to implement these kinds of tasks using some kind of asynchronous pattern: launch the task, then use AJAX polling to provide a progress bar and status information.

    I have been looking into using the BackgroundWorker class, but I have read some things online that give me pause, and I would love some additional advice. For example, I understand that BackgroundWorker uses the thread pool of the current application -- in my case, an ASP.NET web site. I have read that this can be a problem because when the application recycles, the background workers will be terminated. Some of the jobs I mentioned above may take 3 minutes, but others may take a few hours. Also, we may have several hundred administrators all performing similar operations during the day. Will the ASP.NET application thread pool be able to handle all of these background jobs efficiently while still performing its normal request processing?

    So, I am trying to determine whether the BackgroundWorker class and approach is right for our needs, or whether I should be looking at an alternative. Thanks, and sorry for such a long post! Kevin
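
    For comparison, the in-process pattern usually weighed against BackgroundWorker here is a plain thread-pool work item plus a shared progress table that the AJAX poll reads; a bare-bones sketch (names invented, and note that the recycling caveat applies to it just the same):

        using System;
        using System.Collections.Generic;
        using System.Threading;

        static class JobRunner
        {
            // Hedged sketch: this still runs inside the ASP.NET worker
            // process, so an app-pool recycle kills any job in flight.
            static readonly Dictionary<Guid, int> Progress = new Dictionary<Guid, int>();
            static readonly object ProgressLock = new object();

            public static Guid StartPolishJob(int[] recordIds)
            {
                Guid jobId = Guid.NewGuid();
                ThreadPool.QueueUserWorkItem(delegate
                {
                    for (int i = 0; i < recordIds.Length; i++)
                    {
                        PolishRecord(recordIds[i]);   // hypothetical per-record worker
                        lock (ProgressLock)
                            Progress[jobId] = (i + 1) * 100 / recordIds.Length;
                    }
                });
                return jobId;   // an AJAX poll endpoint would read Progress[jobId]
            }

            static void PolishRecord(int id) { /* hypothetical cleanup work */ }
        }

    For the multi-hour jobs, the recycle risk is exactly why this kind of work is often moved out of the web process entirely -- a Windows service or scheduled task consuming a queue table -- with the page polling job status from the database instead.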

  • Catching MediaPlayer Exceptions from WPF MediaElement Control

    - by ScottCate
    I'm playing video in a MediaElement in WPF. It works thousands of times, over and over again. Once in a blue moon (like once a week), I get a Windows crash dialog (you know, the Dr. Watson kind). The MediaElement doesn't surface an error; it just crashes and sits there with an ugly crash report on the screen. If you view the report, you can see it is in fact MediaPlayer that has crashed.

    I know I can disable the crash-report pop-ups, but I'm more interested in finding out what's going wrong. I'm not sure how to capture the results of the Dr. Watson report, but I have the dialog open now if someone has advice on a better way to capture it. Here is the opening line of the data, which points to my application and then to wmvdecod.dll:

        AppName: ScottApp.exe   AppVer: 2.2009.2291.805   AppStamp: 4a36c812
        ModName: wmvdecod.dll   ModVer: 11.0.5721.5145    ModStamp: 453711a3
        fDebug: 0               Offset: 000cbc88

    And from the Windows event log (same information):

        Event Type:     Error
        Event Source:   .NET Runtime 2.0 Error Reporting
        Event Category: None
        Event ID:       1000
        Date:           7/13/2009
        Time:           10:20:27 AM
        User:           N/A
        Computer:       28022
        Description:    Faulting application ScottApp.exe, version 2.2009.2291.805,
                        stamp 4a36c812, faulting module wmvdecod.dll, version
                        11.0.5721.5145, stamp 453711a3, debug? 0,
                        fault address 0x000cbc88.
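
    To get something on disk before the crash dialog appears, the managed hooks below are worth wiring up; a minimal sketch, with the caveat that a native access violation inside wmvdecod.dll can take the process down without ever raising a managed exception:

        using System;
        using System.Windows;

        public partial class App : Application
        {
            protected override void OnStartup(StartupEventArgs e)
            {
                base.OnStartup(e);
                // Log whatever the managed side sees before the process dies.
                AppDomain.CurrentDomain.UnhandledException += (s, args) =>
                    Log("AppDomain: " + args.ExceptionObject);
                DispatcherUnhandledException += (s, args) =>
                    Log("Dispatcher: " + args.Exception);
            }

            static void Log(string message) { /* write to a file or the event log */ }
        }

    MediaElement also exposes a MediaFailed event for decode errors that don't kill the process; logging it alongside these handlers helps narrow down which failures are truly native.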

  • Subselecting with MDX

    - by Vince
    Greetings, Stack Overflow community. I've recently started building an OLAP cube in SSAS 2008 and have gotten stuck; I would be grateful if someone could at least point me in the right direction.

    Situation: two fact tables in the same cube. FactCalls holds information about calls made by subscribers; FactTopups holds topup data. Both tables have numerous common dimensions, one of them being the Subscriber dimension:

        FactCalls          FactTopups
        -------------      -------------
        SubscriberKey      SubscriberKey
        CallDuration       DateKey
        CallCost           Topup Value
        ...

    What I am trying to achieve is to build FactCalls reports restricted to the distinct subscribers who have topped up their accounts within the last 7 days. What I am basically looking for is an MDX equivalent of SQL's:

        select *
        from FactCalls
        where SubscriberKey in
        ( select distinct SubscriberKey from FactTopups where ... );

    I've tried creating a degenerate dimension for both tables containing SubscriberKey and doing:

        Exist( [Calls Degenerate].[Subscriber Key].Children,
               [Topups Degenerate].[Subscriber Key].Children )

    without success. Kind regards, Vince
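
    For orientation, a rough sketch of the shape this often takes, restricting the subscriber set by topup activity with NonEmpty (the hierarchy, measure, and date names below are guesses based on the tables shown, and the 7-day window is a placeholder):

        SELECT { [Measures].[Call Duration], [Measures].[Call Cost] } ON COLUMNS,
               NonEmpty( [Subscriber].[Subscriber Key].[Subscriber Key].Members,
                         ( [Measures].[Topup Value], [Date].[Last 7 Days] ) ) ON ROWS
        FROM [MyCube]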

  • Looking for a smarter way to convert a Python list to a GList?

    - by Kingdom of Fish
    I'm really new to C/Python interaction and am currently writing a small app in C which will read a file (using Python to parse it) and then use the parsed information to execute small Python snippets. At the moment I feel very much like I'm reinventing wheels; for example, this function:

        typedef gpointer (list_func)(PyObject *obj);

        GList *pylist_to_glist(list_func func, PyObject *pylist)
        {
            GList *result = NULL;

            if (func == NULL) {
                fprintf(stderr, "No function defined for converting PyObject.\n");
            } else if (PyList_Check(pylist)) {
                PyObject *pIter = PyObject_GetIter(pylist);
                PyObject *pItem;

                while ((pItem = PyIter_Next(pIter))) {
                    gpointer obj = func(pItem);
                    if (obj != NULL)
                        result = g_list_append(result, obj);
                    else
                        fprintf(stderr, "Could not convert PyObject to C object.\n");
                    Py_DECREF(pItem);
                }
                Py_DECREF(pIter);
            }
            return result;
        }

    I would really like to do this in an easier/smarter way that is less prone to memory leaks and errors. All comments and suggestions are appreciated.
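
    One possible trim, using CPython's PySequence_Fast family: it accepts lists and tuples alike, drops the explicit iterator bookkeeping, and hands back borrowed item references, so there is no per-item DECREF to forget. A sketch reusing the poster's list_func typedef, with error handling kept minimal:

        #include <Python.h>
        #include <glib.h>

        GList *pylist_to_glist_fast(list_func func, PyObject *pylist)
        {
            GList *result = NULL;
            PyObject *fast;
            Py_ssize_t i, n;

            if (func == NULL)
                return NULL;

            /* Accepts any sequence; returns NULL (with an exception set) otherwise. */
            fast = PySequence_Fast(pylist, "expected a list or tuple");
            if (fast == NULL)
                return NULL;

            n = PySequence_Fast_GET_SIZE(fast);
            for (i = 0; i < n; i++) {
                /* Borrowed reference: no Py_DECREF per item. */
                PyObject *item = PySequence_Fast_GET_ITEM(fast, i);
                gpointer obj = func(item);
                if (obj != NULL)
                    result = g_list_append(result, obj);
            }
            Py_DECREF(fast);
            return result;
        }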

  • Automate the signature of the update.rdf manifest for my firefox extension

    - by streetpc
    Hello, I'm developing a Firefox extension and I'd like to provide automatic updates to my beta testers (who are not tech-savvy). Unfortunately, the update server doesn't provide HTTPS. According to the Extension Developer Guide on signing updates, I have to sign my update.rdf and provide an encoded public key in install.rdf.

    There is the McCoy tool to do all of this, but it is an interactive GUI tool, and I'd like to automate the extension packaging in an Ant script (this is part of a much bigger process). I can't find a more precise description of what happens when the update.rdf manifest is signed than the passage below, and McCoy's source is an awful lot of JavaScript. The docs say:

        The add-on author creates a public/private RSA cryptographic key pair.
        The public part of the key is DER encoded and then base 64 encoded and
        added to the add-on's install.rdf as an updateKey entry. (...) Roughly
        speaking the update information is converted to a string, then hashed
        using a sha512 hashing algorithm and this hash is signed using the
        private key. The resultant data is DER encoded then base 64 encoded
        for inclusion in the update.rdf as a signature entry.

    I don't know DER encoding well, but it seems to need some parameters. So, would anyone know either:

        - the full algorithm to sign update.rdf and install.rdf using a predefined key pair,
        - a scriptable alternative to McCoy (would a command-line tool like asn1coding suffice?), or
        - a good/simple developer tutorial on DER encoding.

  • DWM and painting unresponsive apps

    - by Doug Kavendek
    In Vista and later, if an app becomes unresponsive, the Desktop Window Manager is able to handle redrawing it when necessary (moving a window over it, dragging it around, etc.) because it has kept a pixel buffer for it. Windows also tries to detect when an app has become unresponsive after some timeout and makes the best of the situation: I believe it dims out the window, adds "Not Responding" to its title bar, and perhaps applies some other effects.

    Now, we have a skinned app that uses window regions and layered windows, and it doesn't play well with these effects. We've been developing on XP, but have noticed a strange effect when testing on Vista. At some points the app may spend a few moments on some calculation or callback, and if it passes the unresponsive threshold (I've read that it's a five-second timeout, but I cannot find a link), a strange graphical problem occurs: any pixels that would be 100% transparent due to the window regions turn black, which effectively makes the window rectangular again, with a black background. There seem to be other anomalies too, with the original window's pixels being shifted a bit in some child dialogs.

    I am working on reducing such delays (ideally Windows will never need to step in like this) and on maintaining responsiveness while the app is busy, but I'd still like to figure out what causes it to render like that, as I can't guarantee I can eliminate all delays. Basically, I would like to know what Windows is doing when this happens and how I can make my app behave properly with it. Skinned apps still have to work on Vista and later, so I need to figure out what I'm doing that's non-standard. I don't even know exactly how to search for information on how Windows now handles unresponsive apps, as my searches only return people having issues with unresponsive apps, or very rudimentary explanations of what the DWM does with them. Heck, I'm not even 100% sure the DWM is responsible, but it seems likely. Any potential leads?

    Photo of problem (screen shots won't capture the effect; note that the white dialog's buffer is shifted -- it is shifted exactly by the distance it has been offset from the main (blue) window).

  • What programming languages do the top tier Universities teach?

    - by Simucal
    I'm constantly being inundated with articles and people talking about how most of today's universities are nothing more than Java vocational schools, churning out mediocre programmer after mediocre programmer. Our very own Joel Spolsky has his famous article, "The Perils of Java Schools." Similarly, Alan Kay, a famous computer scientist (and SO member), has said this in the past:

        "I fear — as far as I can tell — that most undergraduate degrees in
        computer science these days are basically Java vocational training."
        - Alan Kay (link)

    If the languages taught by the schools are considered such a contributing factor to the quality of a school's program, then I'm curious which languages the "top-tier" computer science schools teach (MIT, Carnegie Mellon, Stanford, etc.). If the average school is performing so poorly due in large part to the languages (or lack thereof) that it teaches, then which languages do the supposedly "good" CS programs teach that differentiate them?

    If you can, provide the name of the school you attended, followed by a list of the languages it uses throughout its coursework.

    Edit: Shog-9 asks why I don't get this information directly from the schools' websites. I would, but many schools' sites don't discuss the languages they use in their class descriptions. Quite a few will say, "using high-level languages we will...", without elaborating on which languages those are. So we should be able to get a pretty accurate list of languages taught at various well-known institutions from the SO members who have attended them.

  • log4net from embedded xml?

    - by sanjeev40084
    I have two projects in Visual Studio: one is a console project, while the other is a regular C# project. In the regular C# project, I have added a config file (Test.config) with a log4net section. This file is embedded.

        <configSections>
            <section name="log4net"
                     type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
        </configSections>
        <log4net>
            <appender name="fileAppender" type="log4net.Appender.RollingFileAppender">
                <file value="log//testapp.log" />
                <appendToFile value="true" />
                <rollingStyle value="Size" />
                <maxSizeRollBackups value="10" />
                <maximumFileSize value="100MB" />
                <staticLogFileName value="true" />
                <layout type="log4net.Layout.PatternLayout,log4net">
                    <param name="ConversionPattern" value="%d{ISO8601} [%t] [%-5p] %c - %m%n" />
                </layout>
            </appender>
            <!-- Setup the root category, add the appenders and set the default priority -->
            <root>
                <priority value="ALL" />
                <appender-ref ref="fileAppender" />
            </root>
        </log4net>

    Now, in my console project, I want log4net to load its configuration from Test.config, which lives in the other project. This is what I do in the constructor of the console project:

        Assembly asm = Assembly.GetExecutingAssembly();
        Stream xmlStream = asm.GetManifestResourceStream("Northwind.Participant.Config.Test.config");
        ILog log = LogManager.GetLogger(typeof(ConsoleStart));

    (Northwind.Participant is the full namespace; Config is the folder where the Test.config file is situated.) Does anyone know how I can do that?
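
    log4net's XmlConfigurator accepts a stream directly, so a hedged sketch of the missing step looks like the following. Two assumptions worth flagging: the configurator locates the log4net element inside whatever document it is handed, and -- since Test.config is embedded in the other assembly -- the stream must come from that assembly, not from GetExecutingAssembly() in the console project:

        using System.IO;
        using System.Reflection;
        using log4net;
        using log4net.Config;

        // Use a type defined in the project that embeds Test.config so the
        // manifest resource is looked up in the right assembly.
        Assembly asm = typeof(SomeTypeInThatProject).Assembly;   // hypothetical type
        using (Stream xmlStream =
                   asm.GetManifestResourceStream("Northwind.Participant.Config.Test.config"))
        {
            XmlConfigurator.Configure(xmlStream);
        }
        ILog log = LogManager.GetLogger(typeof(ConsoleStart));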

  • Why does Java's invokevirtual need to resolve the called method's compile-time class?

    - by Chris
    Consider this simple Java class:

        class MyClass {
            public void bar(MyClass c) {
                c.foo();
            }
        }

    I want to discuss what happens on the line c.foo(). At the bytecode level, the meat of c.foo() will be the invokevirtual opcode and, according to the documentation for invokevirtual, more or less the following will happen:

        1. Look up the foo method defined in compile-time class MyClass.
           (This involves first resolving MyClass.)
        2. Do some checks, including: verify that c is not an initialization
           method, and verify that calling MyClass.foo wouldn't violate any
           protected modifiers.
        3. Figure out which method to actually call. In particular, look up
           c's runtime type. If that type has foo(), call that method and
           return. If not, look up c's runtime type's superclass; if that type
           has foo, call it and return. If not, continue up the superclass
           chain. If no suitable method can be found, raise an error.

    Step 3 alone seems adequate for figuring out which method to call and verifying that said method has the correct argument/return types. So my question is why step 1 gets performed in the first place. Possible answers seem to be:

        - You don't have enough information to perform step 3 until step 1 is
          complete. (This seems implausible at first glance, so please explain.)
        - The linking or access-modifier checks done in steps 1 and 2 are
          essential to prevent certain bad things from happening, and those
          checks must be performed against the compile-time type rather than
          the run-time type hierarchy. (Please explain.)
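
    To make the split concrete, a small illustration: step 1 fixes which declaration the call site refers to at compile time, while the step-3 walk picks the override at run time:

        class Base {
            void foo() { System.out.println("Base.foo"); }
        }

        class Derived extends Base {
            @Override
            void foo() { System.out.println("Derived.foo"); }
        }

        public class Dispatch {
            public static void main(String[] args) {
                Base b = new Derived();
                // The invokevirtual instruction names Base.foo (step 1's
                // compile-time resolution), but the runtime walk of step 3
                // starts at Derived and prints "Derived.foo".
                b.foo();
            }
        }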

  • How to transfer objects through the header in WCF

    - by Michael
    I'm trying to transfer some user information in the header of a message through message inspectors. I have created a behavior which adds the inspector to the service (both client and server). But when I try to communicate with the service, I get the following error:

        XmlException: Name cannot begin with the '<' character, hexadecimal value 0x3C.

    I have also gotten exceptions telling me that DataContracts were unexpected:

        Type 'System.DelegateSerializationHolder+DelegateEntry' with data contract name
        'DelegateSerializationHolder.DelegateEntry:http://schemas.datacontract.org/2004/07/System'
        is not expected. Consider using a DataContractResolver or add any types not known
        statically to the list of known types - for example, by using the KnownTypeAttribute
        attribute or by adding them to the list of known types passed to DataContractSerializer.

    The thing is that my object contains other objects which are marked as DataContract, and I'm not interested in adding the KnownType attribute for those types. Another problem might be that my object to serialize is very restricted in form (internal class, internal properties, etc.).

    Can anyone guide me in the right direction? What am I doing wrong? Some code:

        public virtual object BeforeSendRequest(ref Message request, IClientChannel channel)
        {
            var header = MessageHeader.CreateHeader("<name>", "<namespace>", object);
            request.Headers.Add(header);
            return Guid.NewGuid();
        }
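
    For reference, a minimal sketch of the typed-header route (UserInfo stands in for the real user-information class; note that the header name passed to WCF must be a plain XML name -- a literal placeholder such as "<name>" is exactly what the XmlException above rejects):

        using System;
        using System.Runtime.Serialization;
        using System.ServiceModel;
        using System.ServiceModel.Channels;

        [DataContract]
        class UserInfo   // stand-in for the real type
        {
            [DataMember] public string Name { get; set; }
        }

        static class UserHeader
        {
            // Sending side: wrap the object in a typed header, then untype it.
            public static void Add(Message request, UserInfo info)
            {
                var typed = new MessageHeader<UserInfo>(info);
                request.Headers.Add(typed.GetUntypedHeader("UserInfo", "urn:example:headers"));
            }

            // Receiving side: read it back by name and namespace.
            public static UserInfo Read(Message request)
            {
                return request.Headers.GetHeader<UserInfo>("UserInfo", "urn:example:headers");
            }
        }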

  • Good resources for building web-app in Tapestry

    - by Rich
    Hi, I'm currently researching Tapestry for my company, trying to decide whether we can port our pre-existing proprietary web applications to something better. Currently we run Tomcat and use JSP for our front end, backed by our own framework that eventually uses JDBC to connect to an Oracle database.

    I've gone through the Tapestry tutorial, which was really neat and got me interested, but now I'm faced with what seems to be a common issue: documentation. There are a lot of things I'd need to be sure I could accomplish with Tapestry before I'd be ready to commit fully to it. Does anyone have any good resources -- a book, web article, or anything else -- that go into more detail than the Tapestry tutorial?

    I am also considering integrating with Hibernate, and have read a little about Spring too. I'm still having a hard time understanding how Spring would be more useful than cumbersome in tandem with Tapestry, as they seem to have a lot of overlapping features. An example I read seemed to use Spring to interface with Hibernate, and then Tapestry to Spring, but I was under the impression Tapestry integrates to the same degree with Hibernate directly. The resource I'm speaking of is http://wiki.apache.org/tapestry/Tapstry5First_project_with_Tapestry5,_Spring_and_Hibernate . I was interested because I hadn't found information anywhere else on how to maintain user levels and sessions through a Tapestry application, but I wasn't exactly impressed by the need to use Spring in the example.

  • BizTalk SMTP Message Part Getting XML Encoding

    - by alram
    I have an email multi-part message which I am using to send failed-message-routing reports from the MessageBox to a business user's mailbox:

        Email { Body - RawString; OriginalMessage - string };

    The OriginalMessage part gets set from the received message that activates the orchestration. For example, assume the original failed message is from a flat file that failed disassembly, with the contents:

        Order,1,2,3,4,5,<6>,

    I set the message using:

        Email.OriginalMessage = MyUtil.XlangMsgToStringMethod(FailedMessage);
        // XmlDocument type; this can be malformed XML, valid XML, or a flat
        // file that fails in the disassembler.

    I can write to the event log to test what's in Email.OriginalMessage:

        System.Diagnostics.EventLog.WriteEntry("BizTalk Server 2006",
            Email.OriginalMessage, Information);
        // This displays the correct original message: "Order,1,2,3,4,5,<6>,"

    But when the email is delivered through an SMTP server and a dynamic send port, with the attachment set to the text/plain MIME type, the original message gets XML-escaped and wrapped in XML:

        <?xml version="1.0"?>
        <string>Order, 1,2,3,4,5,&lt;6&gt;,</string>

    Any ideas why? The SMTP port has PassThruTransmit as its pipeline. Thanks.

  • Why does calling abort() on ajax request cause error in ASP.Net MVC (IE8)

    - by user169867
    I use jQuery to post to an MVC controller action that returns a table of information. The user of the page triggers this by clicking various links. In case the user clicks a bunch of these links in quick succession, I wanted to cancel any previous AJAX request that may not have finished.

    I've found that when I do this (although it's fine from the client's point of view), I get errors in the web application saying:

        The parameters dictionary contains a null entry for parameter srtCol
        of non-nullable type 'System.Int32'

    Now, the AJAX post definitely passes in all the parameters, and if I don't try to cancel the request, it works just fine. But if I do cancel the request by calling abort() on the XMLHttpRequest object that ajax() returns before it finishes, I get the error from ASP.NET MVC. Example:

        // Cancel any previous request
        if (req) {
            req.abort();
            req = null;
        }

        // Make new request
        req = $.ajax({
            type: 'POST',
            url: "/Myapp/GetTbl",
            data: { srtCol: srt, view: viewID },
            success: OnSuccess,
            error: OnError,
            dataType: "html"
        });

    I've noticed this only happens in IE8; in Firefox it doesn't seem to cause a problem. Does anyone know how to cancel an AJAX request in IE8 without causing errors for MVC? Thanks for any help.
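
    On the server side, that error message matches the model binder receiving a truncated body from the aborted IE8 request; a defensive sketch of the action, with the signature guessed from the parameters in the post:

        // Hedged sketch: nullable parameters let the binder survive a
        // partial/empty body, so the aborted request dies quietly.
        [HttpPost]
        public ActionResult GetTbl(int? srtCol, int? view)
        {
            if (srtCol == null || view == null)
                return new EmptyResult();   // aborted request: nothing to render

            // BuildTable is a hypothetical helper standing in for the real work.
            return PartialView("Tbl", BuildTable(srtCol.Value, view.Value));
        }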

  • How to use Java varargs with the GWT Javascript Native Interface? (aka, "GWT has no printf()")

    - by markerikson
    I'm trying to quickly learn GWT as part of a new project. I found out that GWT doesn't implement Java's String.format() function, so there's no printf()-like functionality. I knew that some printf() implementations exist for JavaScript, so I figured I could paste one in as a GWT JavaScript Native Interface function. I ran into problems and decided I'd better make sure the varargs values were being passed in correctly. That's where things got ugly. First, some example code:

        // From Java, call the JSNI function:
        test("sourceString", "params1", "params2", "params3");
        ....

        public static native void test(Object... params) /*-{
            // PROBLEM: this kills GWT!
            // alert(params.length);

            // returns "function"
            alert(typeof(params));

            // returns "[Ljava.lang.Object;@b97ff1"
            alert(params);
        }-*/;

    The GWT docs state that "calling a varargs JavaScript method from Java will result in the callee receiving the arguments in an array". I figured that meant I could at least check params.length, but accessing that throws a JavascriptException wrapped in an UmbrellaException, with no real information. When I check typeof(params), it returns "function". As if that weren't odd enough, when I check the string value of params, it returns what appears to be a string version of a Java reference.

    So, I guess I'm asking a few different questions here:

        1. How do GWT/JSNI varargs actually work, and do I need to do something
           special to pass in values?
        2. What is actually going on here?
        3. Is there any easier way to get printf()-style formatting in a GWT
           application?
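
    A hedged workaround sketch, built on the observation above that Java arrays cross into JSNI as opaque handles: repack the varargs into a genuine JavaScript array on the Java side before crossing (narrowed to String... here for simplicity):

        import com.google.gwt.core.client.JavaScriptObject;
        import com.google.gwt.core.client.JsArrayString;

        public class VarargsBridge {
            // Repack the Java varargs into a real JS array first.
            public static void test(String... params) {
                JsArrayString arr = JavaScriptObject.createArray().cast();
                for (String p : params) {
                    arr.push(p);
                }
                nativeTest(arr);
            }

            private static native void nativeTest(JsArrayString params) /*-{
                // params is now a plain JS array, so length and indexing behave
                alert(params.length);
                alert(params[0]);
            }-*/;
        }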

  • Django deployment - can't import app.urls

    - by hora
    I just moved a Django project to a deployment server from my dev server, and I'm having some issues deploying it. My Apache config is as follows:

        <Location "/">
            Order allow,deny
            Allow from all
            SetHandler python-program
            PythonHandler django.core.handlers.modpython
            SetEnv DJANGO_SETTINGS_MODULE project.settings
            PythonDebug On
            PythonPath "['/home/django/'] + sys.path"
        </Location>

    Django does work, since it renders the Django debug views, but I get the following error:

        ImportError at /
        No module named app.urls

    And here is all the information Django gives me:

        Request Method: GET
        Request URL:    http://myserver.com/
        Django Version: 1.1.1
        Python Version: 2.6.5
        Installed Applications:
            ['django.contrib.auth', 'django.contrib.contenttypes',
             'django.contrib.sessions', 'django.contrib.sites',
             'django.contrib.admin', 'django.contrib.admindocs',
             'project.app']
        Installed Middleware:
            ('django.middleware.common.CommonMiddleware',
             'django.contrib.sessions.middleware.SessionMiddleware',
             'django.contrib.auth.middleware.AuthenticationMiddleware')

        Traceback:
        File "/usr/lib64/python2.6/site-packages/django/core/handlers/base.py" in get_response
          83. request.path_info)
        File "/usr/lib64/python2.6/site-packages/django/core/urlresolvers.py" in resolve
          218. sub_match = pattern.resolve(new_path)
        File "/usr/lib64/python2.6/site-packages/django/core/urlresolvers.py" in resolve
          216. for pattern in self.url_patterns:
        File "/usr/lib64/python2.6/site-packages/django/core/urlresolvers.py" in _get_url_patterns
          245. patterns = getattr(self.urlconf_module, "urlpatterns", self.urlconf_module)
        File "/usr/lib64/python2.6/site-packages/django/core/urlresolvers.py" in _get_urlconf_module
          240. self._urlconf_module = import_module(self.urlconf_name)
        File "/usr/lib64/python2.6/site-packages/django/utils/importlib.py" in import_module
          35. __import__(name)

        Exception Type:  ImportError at /
        Exception Value: No module named app.urls

    Any ideas as to why I get an import error?
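
    One hedged observation from the config above: with PythonPath set to /home/django, the app is importable as project.app (which matches INSTALLED_APPS), so a bare include('app.urls') in the root URLconf would fail in exactly this way. A sketch of the fully qualified form, in Django 1.1 idioms:

        # project/urls.py -- hedged sketch, not the poster's actual file
        from django.conf.urls.defaults import *

        urlpatterns = patterns('',
            (r'^', include('project.app.urls')),   # fully qualified, not plain 'app.urls'
        )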

  • How should I go about implementing a points-to analysis in Maude?

    - by reprogrammer
    I'm going to implement a points-to analysis algorithm. I'd like to base the implementation mainly on the algorithm by Whaley and Lam. Whaley and Lam use a BDD-based implementation of Datalog to represent and compute the points-to analysis relations. The following lists some of the relations used in a typical points-to analysis. Note that D(w, z) :- A(w, x), B(x, y), C(y, z) means D(w, z) is true if A(w, x), B(x, y), and C(y, z) are all true. A BDD is the data structure used to represent these relations.

    Relations:

        input  vP0    (variable : V, heap : H)
        input  store  (base : V, field : F, source : V)
        input  load   (base : V, field : F, dest : V)
        input  assign (dest : V, source : V)
        output vP     (variable : V, heap : H)
        output hP     (base : H, field : F, target : H)

    Rules:

        vP(v, h)      :- vP0(v, h)
        vP(v1, h)     :- assign(v1, v2), vP(v2, h)
        hP(h1, f, h2) :- store(v1, f, v2), vP(v1, h1), vP(v2, h2)
        vP(v2, h2)    :- load(v1, f, v2), vP(v1, h1), hP(h1, f, h2)

    I need to understand whether Maude is a good environment for implementing points-to analysis. I noticed that Maude uses a BDD library called BuDDy, but it looks like Maude uses BDDs for a different purpose, namely unification. So I thought I might be able to use Maude instead of a Datalog engine to compute the relations of my points-to analysis. I assume Maude propagates independent information concurrently, and this concurrency could potentially make my points-to analysis faster than sequential processing of rules. But I don't know the best way to represent my relations in Maude. Should I implement BDDs in Maude myself, or does Maude's internal BDD-based unification have the same effect?

  • MS-Access: What could cause one form with a join query to load right and another not?

    - by Daniel Straight
    Form1: Form1 is bound to Table1. Table1 has an ID field.

    Form2: Form2 is bound to Table2 joined to Table1 on Table2.Table1_ID = Table1.ID. Here is the SQL (generated by Access):

        SELECT Table2.*, Table1.[FirstFieldINeed], Table1.[SecondFieldINeed],
               Table1.[ThirdFieldINeed]
        FROM Table1 INNER JOIN Table2 ON Table1.ID = Table2.[Table1_ID];

    Form2 is opened with this code in Form1:

        DoCmd.RunCommand acCmdSaveRecord
        DoCmd.OpenForm "Form2", , , , acFormAdd, , Me.[ID]
        DoCmd.Close acForm, "Form1", acSaveYes

    and when loaded runs:

        Me.[Table1_ID] = Me.OpenArgs

    When Form2 is loaded, fields bound to columns from Table1 show up correctly.

    Form3: Form3 is bound to Table3 joined to Table2 on Table3.Table2_ID = Table2.ID. Here is the SQL (generated by Access):

        SELECT Table3.*, Table2.[FirstFieldINeed], Table2.[SecondFieldINeed]
        FROM Table2 INNER JOIN Table3 ON Table2.ID = Table3.[Table2_ID];

    Form3 is opened with this code in Form2:

        DoCmd.RunCommand acCmdSaveRecord
        DoCmd.OpenForm "Form3", , , , acFormAdd, , Me.[ID]
        DoCmd.Close acForm, "Form2", acSaveYes

    and when loaded runs:

        Me.[Table2_ID] = Me.OpenArgs

    When Form3 is loaded, fields bound to columns from Table2 do not show up correctly. WHY?

    UPDATES: I tried making the join query into a separate query and using that as my record source, but it made no difference at all. If I go to the query for Form3 and view it in datasheet view, I can see that the information that should be pulled into the form is there; it just isn't showing up on the form.

  • What is the PIXELFORMATDESCRIPTOR parameter in SetPixelFormat() used for?

    - by Mads Elvheim
    Usually when setting up OpenGL contexts, I've simply filled out a PIXELFORMATDESCRIPTOR structure with the necessary information and called ChoosePixelFormat(), followed by a call to SetPixelFormat() with the matching pixel format index returned by ChoosePixelFormat(). Then I've simply passed the initial descriptor along without giving much thought to why.

    But now I use wglChoosePixelFormatARB() instead of ChoosePixelFormat(), because I need some extended traits like sRGB and multisampling. It takes an attribute list of integers, just like Xlib/GLX on Linux -- not a PIXELFORMATDESCRIPTOR structure. So, do I really have to fill in a descriptor for SetPixelFormat() to use? What does SetPixelFormat() use the descriptor for when it already has the pixel format index? Why do I have to specify the same pixel format attributes in two different places? And which one takes precedence: the attribute list passed to wglChoosePixelFormatARB(), or the PIXELFORMATDESCRIPTOR attributes passed to SetPixelFormat()?

    Here are the function prototypes, to make the question clearer:

        /* Finds a best match based on a PIXELFORMATDESCRIPTOR, and returns the
           pixel format index */
        int ChoosePixelFormat(HDC hdc, const PIXELFORMATDESCRIPTOR *ppfd);

        /* Finds a best match based on an attribute list of integers and floats,
           and returns a list of indices of matches, with the best matches at
           the head. Also supports extended pixel format traits like sRGB color
           space, floating-point framebuffers and multisampling. */
        BOOL wglChoosePixelFormatARB(HDC hdc, const int *piAttribIList,
                                     const FLOAT *pfAttribFList, UINT nMaxFormats,
                                     int *piFormats, UINT *nNumFormats);

        /* Sets the pixel format based on the pixel format index */
        BOOL SetPixelFormat(HDC hdc, int iPixelFormat,
                            const PIXELFORMATDESCRIPTOR *ppfd);
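
    For what it's worth, a common pattern sidesteps the duplication: let wglChoosePixelFormatARB() pick the index, then have Windows fill in the descriptor for that index with DescribePixelFormat(), so SetPixelFormat() receives a descriptor consistent with whatever the attribute list selected. A sketch, error handling trimmed:

        #include <windows.h>

        /* pixelFormat is the index returned by wglChoosePixelFormatARB(). */
        BOOL apply_pixel_format(HDC hdc, int pixelFormat)
        {
            PIXELFORMATDESCRIPTOR pfd;
            /* Ask the system to describe the chosen format rather than
               hand-filling a second, possibly inconsistent descriptor. */
            if (!DescribePixelFormat(hdc, pixelFormat, sizeof(pfd), &pfd))
                return FALSE;
            return SetPixelFormat(hdc, pixelFormat, &pfd);
        }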

  • Crystal Reports Failed Database Login

    - by Marlon
    Hello. After spending a good 3 to 4 hours on Google trying to find any solution to my problem, I haven't had much luck. Basically, we use Crystal Reports for our .NET applications with a SQL Server back end. We have many clients, each with their own server, so our reports need to have their connections set dynamically. Up until a week ago this worked fine.

    However, a few days ago a client reported they were getting a database login prompt for a report (only one report; the rest worked fine). We were quite stumped, but we managed to reproduce it on a netbook which didn't have Visual Studio or SQL Server installed. In the end, the dev decided to recreate the report in the hope it was just an oddity in that particular report. Unfortunately, a new client today experienced the same problem, but this time for every Crystal report they had -- and those also worked on the netbook, so we are really quite lost here.

    Below is a screenshot of what our clients get presented with, and here is the code I use to set the connection information in the report:

        cI.ServerName = (string)builder["Data Source"];
        cI.DatabaseName = (string)builder["Initial Catalog"];
        cI.UserID = (string)builder["User ID"];
        cI.Password = (string)builder["Password"];

        foreach (IConnectionInfo info in cryRpt.DataSourceConnections)
        {
            info.SetConnection(cI.ServerName, cI.DatabaseName, cI.UserID, cI.Password);
        }

        foreach (ReportDocument sub in cryRpt.Subreports)
        {
            foreach (IConnectionInfo info in sub.DataSourceConnections)
            {
                info.SetConnection(cI.ServerName, cI.DatabaseName, cI.UserID, cI.Password);
            }
        }

    As always, any help is much appreciated.
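
    One extra layer that sometimes matters with Crystal (offered as a hedged sketch, since behavior varies across runtime versions): pushing the same credentials down to each table as well, in case a report's tables don't inherit the connection-level change:

        using CrystalDecisions.CrystalReports.Engine;
        using CrystalDecisions.Shared;

        // Hedged sketch: apply the logon per table, recursing into subreports.
        static void ApplyLogon(ReportDocument report, ConnectionInfo cI)
        {
            foreach (Table table in report.Database.Tables)
            {
                TableLogOnInfo logOn = table.LogOnInfo;
                logOn.ConnectionInfo = cI;
                table.ApplyLogOnInfo(logOn);
            }
            foreach (ReportDocument sub in report.Subreports)
                ApplyLogon(sub, cI);
        }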
