Search Results

Search found 27142 results on 1086 pages for 'control structure'.

Page 942 of 1086

  • How to group strings by prefix

    - by namenlos
    I am writing a WinForms UI in which the user must select a single customer. (For reasons beyond my control I am limited to a UI that uses dropdown lists, text fields, checkboxes and radio buttons only - i.e. no fancy special UI controls.)
    The situation: There are a lot of customers (a thousand, for example). If I put all the customers in a single dropdown there is no way it will be easy for the user to even see them all, and it will take too long to retrieve all the customers from the DB to populate the dropdown. My thought is to have two combo boxes: the first lists groups of the customers by last name, something like a phone book ("Aa-Ac", "Ad-Ade", "Adf-B"); selecting an entry in the first combo box scopes the second one to a manageable set of customer names (no more than, for example, 40 names).
    The question: I need a reasonable way of grouping the names such that it is clear to the user which group contains a given name. I.e. given a group of names I need to bucketize them into ranges like "Aa-Ac".
    Comments: I don't need to solve the general problem of an immense number of names - we know from our data that 1000 names is the maximum our users will encounter. If there are other techniques please do share, but I am specifically interested in an answer to my question about how to determine the buckets ("Aa-Ac", etc.).
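
    One possible way to derive the buckets, sketched in C# (the bucket size of 40, the class names and the prefix-shortening helper are all illustrative assumptions, not part of the original question): sort the last names, split them into chunks, and label each chunk with the shortest prefixes that still distinguish its boundaries from the neighbouring chunks.

      using System;
      using System.Collections.Generic;
      using System.Linq;

      public class Bucket
      {
          public string Label;          // e.g. "Aa-Ac"
          public List<string> Names;    // the customers shown in the second combo
      }

      public static class NameBuckets
      {
          // Sort the last names, split them into chunks of bucketSize, and label
          // each chunk with the shortest prefixes that still distinguish its first
          // and last names from the neighbouring chunks.
          public static List<Bucket> Build(IEnumerable<string> lastNames, int bucketSize)
          {
              var sorted = lastNames.OrderBy(n => n, StringComparer.OrdinalIgnoreCase).ToList();
              var buckets = new List<Bucket>();

              for (int i = 0; i < sorted.Count; i += bucketSize)
              {
                  var chunk = sorted.Skip(i).Take(bucketSize).ToList();
                  string from = Shorten(chunk[0], i > 0 ? sorted[i - 1] : null);
                  string to = Shorten(chunk[chunk.Count - 1],
                                      i + bucketSize < sorted.Count ? sorted[i + bucketSize] : null);
                  buckets.Add(new Bucket { Label = from + "-" + to, Names = chunk });
              }
              return buckets;
          }

          // Shortest prefix of name that differs from the neighbouring name.
          private static string Shorten(string name, string neighbour)
          {
              if (string.IsNullOrEmpty(neighbour)) return name.Substring(0, 1);
              int len = 1;
              while (len < name.Length && string.Compare(
                         name, 0, neighbour, 0, len, StringComparison.OrdinalIgnoreCase) == 0)
              {
                  len++;
              }
              return name.Substring(0, Math.Min(len, name.Length));
          }
      }

    Binding the result is then straightforward: the first combo lists the Label values and the second combo is repopulated with the selected bucket's Names.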

    Read the article

  • ACL architecture for a Software as a Service in Spring 3.0

    - by geoaxis
    I am building a software as a service using Spring 3.0 (Spring MVC, Spring Security, Spring Roo, Hibernate) and I have to come up with a flexible access control list mechanism. I have three different kinds of users:
    System (who can do anything to the system; includes admin and internal daemons)
    Operations (who can add and delete users and organizations, and do maintenance work on behalf of users and organizations)
    End Users (they belong to one or more organizations; for each organization the user can have one or more roles, like being organization admin or organization read-only member; a role like orgadmin can also add users for that organization)
    Now my question is, how should I model the User entity? If I just take the End User, it can belong to one or more organizations, so each user can contain a set of references to its organizations. But how do we model the user's roles for each organization? For example, user UX belongs to organizations og1, og2 and og3; for og1 he is both orgadmin and org-read-only-user, whereas for og2 he is only orgadmin and for og3 he is only org-read-only-user. I could make each user belong to one organization alone, but that makes the system bounded and I don't like that idea (although it would still satisfy the requirement). If you have a better, extensible ACL architecture, please suggest it. Since it is a software as a service, one would expect a lot of different organizations to be part of the same system.
    I had one concern that it is not a good idea to keep og1 and og2 data in the same DB (if og1 decides to spawn a 100 reports on the system, og2 should not suffer). But that is something advanced for now and is not directly related to ACL, but to the physical distribution of data and the setup of services based on those ACLs. This is a community wiki question, please correct anything you wish. Thanks

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can submit a more elegant solution.
    1) File based. The system loads a folder structure based on the URL requested:
      /site.com
      /site.fakeTLD
      /lib
      index.php
    For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file, add site.fakeTLD to point to my own computer (127.0.0.1/localhost) and then create a vhost in Apache. So now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack: someone requesting site.com could set the host to site.fakeTLD, and then the system would load my development config files instead of production.
    2) Config based. The config file contains one section for development and one for production. The problem is that each time you go to push your changes to the repo you have to edit the file to specify which set of config options should be used:
      $use = 'production'; //'development';
    This leaves the repo open to human error should one of the developers forget to enable the right setting.
    3) File system check based. All the development machines have an extra empty file called "development.txt" or something. Each time the system loads it checks for this file - if found, it knows it is in development mode; if missing, it knows it is in production mode. Since the file is never added to the repo it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown since filesystem checks are slow.
    Is there any way the server can auto-detect whether to use the development or production configs?

    Read the article

  • IIS7 Web.Config Custom Errors

    - by Michael
    Using GoDaddy to host my site (I know that's my first problem)! :-) I am trying to set up custom error messages for my site. GoDaddy allows you to set up a 404 page in their control panel, but I can't override this or set up any additional error redirects, specifically a 500 server error. Here is my web.config file:
      <configuration>
        <system.webServer>
          <rewrite>
            <rules>
              <rule name="Redirect to WWW" stopProcessing="true">
                <match url=".*" />
                <conditions>
                  <add input="{HTTP_HOST}" pattern="^mysite.com$" />
                </conditions>
                <action type="Redirect" url="http://www.mysite.com/{R:0}" redirectType="Permanent" />
              </rule>
            </rules>
          </rewrite>
        </system.webServer>
        <system.web>
          <customErrors mode="On" defaultRedirect="http://www.mysite.com/oops.php">
            <error statusCode="404" redirect="http://www.mysite.com/oops.php?error=404" />
            <error statusCode="500" redirect="http://www.mysite.com/oops.php?error=500" />
          </customErrors>
        </system.web>
      </configuration>

    Read the article

  • WPF MVVM Chart change axes

    - by c0uchm0nster
    I'm new to WPF and MVVM. I'm struggling to determine the best way to change the view of a chart. That is, initially a chart might have the axes X - ID, Y - Length; after the user changes the view (via a listbox, radio button, etc.) the chart would display X - Length, Y - ID; and after a third change by the user it might display new content: X - ID, Y - Quality. My initial thought was that the best way to do this would be to change the bindings themselves, but I don't know how to tell a control in XAML to bind using a Binding object in the ViewModel, or whether it's safe to change that binding at runtime. Then I thought maybe I could just have a generic Model that has members X and Y and populate them as needed in the ViewModel. My last thought was that I could have 3 different chart controls and just hide and show them as appropriate. What is the CORRECT/SUGGESTED way to do this in the MVVM pattern? Any code examples would be greatly appreciated. Thanks
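
    A minimal sketch of the second idea mentioned above (a generic point model repopulated by the view model), assuming the chart's series binds its ItemsSource to a single collection; all class, property and view names here are illustrative, not taken from the question:

      using System.Collections.Generic;
      using System.Collections.ObjectModel;
      using System.ComponentModel;

      public class Item { public double Id; public double Length; public double Quality; }

      public class ChartPoint
      {
          public double X { get; set; }
          public double Y { get; set; }
      }

      public class ChartViewModel : INotifyPropertyChanged
      {
          private readonly ObservableCollection<ChartPoint> points = new ObservableCollection<ChartPoint>();
          private readonly IList<Item> items;      // loaded from the model elsewhere
          private string selectedView = "IdVsLength";

          public ChartViewModel(IList<Item> items) { this.items = items; Rebuild(); }

          // The chart's series ItemsSource binds to Points once; the binding never changes.
          public ObservableCollection<ChartPoint> Points { get { return points; } }

          // Bound to the listbox/radio buttons that pick the view.
          public string SelectedView
          {
              get { return selectedView; }
              set { selectedView = value; Rebuild(); OnPropertyChanged("SelectedView"); }
          }

          private void Rebuild()
          {
              points.Clear();
              foreach (var item in items)
              {
                  if (selectedView == "IdVsLength")
                      points.Add(new ChartPoint { X = item.Id, Y = item.Length });
                  else if (selectedView == "LengthVsId")
                      points.Add(new ChartPoint { X = item.Length, Y = item.Id });
                  else // "IdVsQuality"
                      points.Add(new ChartPoint { X = item.Id, Y = item.Quality });
              }
          }

          public event PropertyChangedEventHandler PropertyChanged;
          private void OnPropertyChanged(string name)
          {
              PropertyChangedEventHandler h = PropertyChanged;
              if (h != null) h(this, new PropertyChangedEventArgs(name));
          }
      }

    With this shape the XAML binding is fixed, and switching the user's selection only swaps the data inside the collection, which keeps the view free of chart-specific logic.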

    Read the article

  • Qt, no such slot error

    - by Martin Beckett
    I'm having a strange problem with slots in Qt 4.6. I have a tree control that is trying to fire an event back to the main window to edit something. I have boiled it down to this minimal example:
      class MainWindow : public QMainWindow
      {
          Q_OBJECT
      public:
          MainWindow(int argc, char **argv);
      private Q_SLOTS:
          void about() { QMessageBox::about(this, "about", "about"); }          // just the default about box
          void doEditFace() { QMessageBox::about(this, "test", "doEditFace"); } // pretty much identical
          ....
      };

      class TreeModel : public QAbstractItemModel
      {
          Q_OBJECT
      Q_SIGNALS:
          void editFace();
          ...
      };
    In my MainWindow() I have:
      connect(treeModel, SIGNAL(editFace()), this, SLOT(about()));      // returns true
      connect(treeModel, SIGNAL(editFace()), this, SLOT(doEditFace())); // returns false
    When I run it the second line gives a warning:
      Object::connect: No such slot MainWindow::doEditFace() in \src\mainwindow.cpp:588
    As far as I can see, doEditFace() is in the moc_ qmeta class perfectly correctly, and the edit event fires and pops up the about box. The order doesn't matter; if I connect the about box second it still works but my slot doesn't! VS2008, Windows XP, Qt 4.6.2

    Read the article

  • Factories, or Dependency Injection for object instantiation in WCF, when coding against an interface

    - by Saajid Ismail
    Hi. I am writing a client/server application where the client is a Windows Forms app and the server is a WCF service hosted in a Windows Service. Note that I control both sides of the application. I am trying to follow the practice of coding against an interface: I have a Shared assembly which is referenced by the client application. This project contains my WCF service contracts and the interfaces which will be exposed to clients. I am trying to expose only interfaces to the clients, so that they depend only on a contract, not on any specific implementation. One of the reasons for doing this is so that I can change my service implementation and domain at any time without having to recompile and redeploy the clients; the interfaces/contracts will in that case not change, and I only need to recompile and redeploy my WCF service. The design issue I am facing now is: on the client, how do I create new instances of objects, e.g. ICustomer, if the client doesn't know about the concrete Customer implementation? I need to create a new customer to be saved to the DB. Do I use dependency injection, or a Factory class to instantiate new objects, or should I just allow the client to create new instances of concrete implementations? I am not doing TDD, and I will typically only have one implementation of ICustomer or any other exposed interface.
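
    For illustration only (none of these type names come from the question), a minimal factory kept next to the contracts might look like the sketch below; the client codes against ICustomer and ICustomerFactory, and the factory (or an equivalent DI container registration, whose syntax varies by container) is the single place that knows which concrete type backs the interface:

      // Shared assembly: contracts plus a factory the client can call.
      public interface ICustomer
      {
          string Name { get; set; }
          string Email { get; set; }
      }

      public interface ICustomerFactory
      {
          ICustomer CreateCustomer();
      }

      // A simple data-holding implementation can live alongside the contracts
      // (or in a separate assembly); the client only ever sees ICustomer.
      internal class Customer : ICustomer
      {
          public string Name { get; set; }
          public string Email { get; set; }
      }

      public class CustomerFactory : ICustomerFactory
      {
          public ICustomer CreateCustomer()
          {
              return new Customer();
          }
      }

      // Client-side usage sketch.
      class ClientCode
      {
          private readonly ICustomerFactory factory = new CustomerFactory();

          public void SaveNewCustomer(/* ICustomerService proxy */)
          {
              ICustomer customer = factory.CreateCustomer();
              customer.Name = "New customer";
              // proxy.Save(customer);
          }
      }

    Whether the default implementation is a plain DTO like this or something richer is a separate decision; the point of the sketch is only that swapping the implementation later touches the factory, not every call site.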

    Read the article

  • Why ComboBox hides cursor when DroppedDown is set?

    - by Ivan Danilov
    Let's create a WinForms application (I have Visual Studio 2008 running on Windows Vista, but it seems the described situation occurs almost everywhere from Win98 to Vista, in native or managed code) and write this code:
      using System;
      using System.Drawing;
      using System.Windows.Forms;

      namespace WindowsFormsApplication1
      {
          public class Form1 : Form
          {
              private readonly Button button1 = new Button();
              private readonly ComboBox comboBox1 = new ComboBox();
              private readonly TextBox textBox1 = new TextBox();

              public Form1()
              {
                  SuspendLayout();
                  textBox1.Location = new Point(21, 51);
                  button1.Location = new Point(146, 49);
                  button1.Text = "button1";
                  button1.Click += button1_Click;
                  comboBox1.Items.AddRange(new[] {"1", "2", "3", "4", "5", "6"});
                  comboBox1.Location = new Point(21, 93);
                  AcceptButton = button1;
                  Controls.AddRange(new Control[] {textBox1, comboBox1, button1});
                  Text = "Form1";
                  ResumeLayout(false);
                  PerformLayout();
              }

              private void button1_Click(object sender, EventArgs e)
              {
                  comboBox1.DroppedDown = true;
              }
          }
      }
    Then run the app. Place the mouse cursor on the form and don't touch the mouse any more. Start to type something in the TextBox - the cursor is hidden because of the typing. When you press the Enter key the Click event fires and the ComboBox drops down. But now the cursor won't reappear even if you move the mouse! It appears only when you click somewhere. I have found a discussion of this problem, but there's no good solution... Any thoughts? :)
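
    One commonly suggested workaround (offered here as an assumption, not a confirmed fix for every Windows version) is to force the cursor to be shown again right after dropping the list down:

      private void button1_Click(object sender, EventArgs e)
      {
          comboBox1.DroppedDown = true;
          Cursor.Current = Cursors.Default;   // un-hides the cursor that typing hid
      }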

    Read the article

  • css positioning

    - by bsreekanth
    Hello, I have uploaded a part of my screen (link: http://yfrog.com/0d30127380p). It is part of a forum, so there are elements above and below it. The "Response Required Date" row has a label, a date picker, and two drop-down select controls for the time. I tried setting the width of the datepicker element and a right margin so that the time selectors would sit next to it, but they always sit below it. I'm not good at CSS positioning, so any suggestion would be highly appreciated.
      <div class="wrapper ">
        <label for="responseRequiredDate">
          Response Required Date <span class="indicator">*</span>
        </label>
        <input type="hidden" name="responseRequiredDate" value="struct" />
        <div class="datetimepicker">
          <div class="datePicker"> </div>
          <script> ...</script>
          <div class="timepicker"><select ....
        </div>
      </div>
    The date picker inserts a script tag - would that cause a problem? Probably not.

    Read the article

  • How to set "Run this program as an administrator" programmatically.

    - by Patrick
    I'm having a problem with good ol' bdeadmin.exe in Vista. First, let's get the predictable responses out of the way:
    "You should not require your application to be elevated." This one does. C'est la vie.
    "You need to embed a manifest file." It is already compiled, it is many years old, the company that created it has no intention of doing it again, and it is installed from a merge module (MSM file).
    "BDE is obsolete, you should be using dbExpress." One and a half million lines of code. 'Nuff said.
    "Drop a manifest file next to the EXE." Tried that; it did nothing. As a test, that same manifest file was able to make several other EXE files require elevation, just not the one I wanted. Something in there is preventing the external manifest from being read.
    "Create a shortcut and set SLDF_RUNAS_USER." Can't do that, it's a Control Panel applet.
    The only thing that worked was setting "Run this program as an administrator" under the Compatibility tab of its Properties window. I shouldn't have to tell users to do this - bad for business. I need to have the installer do this. The MSM file uses a static path. Any ideas?
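
    For what it's worth, the Compatibility tab stores its flags in the registry under ...\AppCompatFlags\Layers, with the full EXE path as the value name and "RUNASADMIN" as (part of) the value data, so an installer can write the same value itself - either as a plain registry entry in the MSI/MSM or via a custom action. A hedged C# sketch (the install path shown is hypothetical and should match wherever the MSM actually puts bdeadmin.exe):

      using Microsoft.Win32;

      class ElevationFlag
      {
          static void MarkRunAsAdmin(string exePath, bool allUsers)
          {
              const string key = @"Software\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers";
              RegistryKey root = allUsers ? Registry.LocalMachine : Registry.CurrentUser;
              using (RegistryKey layers = root.CreateSubKey(key))
              {
                  // Value name = full path to the EXE, data = compatibility flags.
                  layers.SetValue(exePath, "RUNASADMIN", RegistryValueKind.String);
              }
          }

          static void Main()
          {
              // Hypothetical path; writing to HKLM requires the installer to run elevated.
              MarkRunAsAdmin(@"C:\Program Files\Common Files\Borland Shared\BDE\bdeadmin.exe", true);
          }
      }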

    Read the article

  • Need help receiving ByteArray data

    - by k80sg
    Hi folks, I am trying to receive byte array data from a machine. It sends out 3 different types of data structure, each with a different number of fields (mostly ints and a few floats) and a different byte size: 320 bytes for the first type, 420 for the second and 560 for the third. When the sending program is launched, it fires all 3 types of data, one after the other, with an interval of 1 sec. Example sending order:
      Pack1 - 320 bytes
      (1 sec later) Pack2 - 420 bytes
      (1 sec later) Pack3 - 560 bytes
      (1 sec later) Pack1 - 320 bytes
      ...
    How do I check the incoming byte size before passing it to:
      byte[] handsize = new byte[bytesize];
    The data I receive is all out of order. For instance, using the following to read everything as ints:
      System.out.println("Reading data in int format:" + " " + datainput.readInt());
    I get many different sets of values whenever I run my program - some valid field data, but all over the place. I am not too sure how exactly I should do it, but I tried the following, and apparently my data fields are not received in the correct sequence:
      BufferedInputStream bais = new BufferedInputStream(requestSocket.getInputStream());
      DataInputStream datainput = new DataInputStream(bais);
      byte[] handsize = new byte[560];
      datainput.readFully(handsize);
      int n = 0;
      int intByte[] = new int[140];
      for (int i = 0; i < 140 ; i++) {
          System.out.println("Reading data in int format:" + " " + datainput.readInt());
          intByte[n] = datainput.readInt();
          n = n + 1;
          System.out.println("The value in array is:" + intByte[0]);
          System.out.println("The value in array is:" + intByte[1]);
          System.out.println("The value in array is:" + intByte[2]);
          System.out.println("The value in array is:" + intByte[3]);
    Also, from the above code, the order of the values printed out with
      System.out.println("Reading data in int format:" + " " + datainput.readInt());
    and
      System.out.println("The value in array is:" + intByte[0]);
      System.out.println("The value in array is:" + intByte[1]);
    is different. Any help will be appreciated. Thanks

    Read the article

  • Request for the permission of type 'System.Data.Odbc.OdbcPermission.. help needed

    - by Matt
    I'm getting the following error when trying to connect to a remote MySQL server:
      Request for the permission of type 'System.Data.Odbc.OdbcPermission, System.Data, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.
    I've installed the ODBC 5.1 driver and can connect to the database using the Data Sources (ODBC) tool in Control Panel. However, when I try to run my C# script to connect, I get the above error. I've read it's something to do with trust levels, but I didn't quite understand what people were talking about. I went to C:... Framework/v2.0.50727/CONFIG and added to the medium and high trust.config files, but that didn't help. Can someone help me out here please? My connection string is:
      MyConString = "DRIVER={MySQL ODBC 5.1 Driver};" +
                    "SERVER=" + m_strHost + ";" +
                    "PORT=3306;" +
                    "DATABASE=" + m_strDatabase + ";" +
                    "UID=" + m_strUserName + ";" +
                    "PWD=" + m_strPassword + ";" +
                    "OPTION=3;";

    Read the article

  • VS2010 / Target Framework = 3.5 / Building on Continuous Integration Server

    - by granadaCoder
    I'm checking into upgrading to VS2010. Our production servers only have the 3.5 Framework, and it will be 6-9 months before they are updated. We also have a continuous integration server running CruiseControl.NET (CC.NET); it has the 3.5 Framework on it as well. Our implementation of CC.NET mainly calls msbuild.exe MySolution.msbuild (we encapsulate most of the build logic into .msbuild files, FYI). Inside the .msbuild file, the "Build" target looks like this:
      <Target Name="Build" DependsOnTargets="Checkout">
        <MSBuild Projects="$(WorkingCheckout)\MySolution.sln" Targets="Build" Properties="Configuration=$(Configuration)">
          <Output TaskParameter="TargetOutputs" ItemName="TargetOutputsItemName" />
        </MSBuild>
      </Target>
    I know that VS2010 can "target" the 3.5 Framework. My question is what happens when I have a VS2010 dev machine and I check the VS2010 .sln and .csproj files into source control (svn, btw): will the CC.NET machine, which only has the 3.5 Framework installed on it, be able to build the .sln? I guess I could test it, but the catch-22 is that I don't have VS2010 (yet), so I'm asking before I try the trial or a real install. Any ideas what will happen? I guess the crux question is what will happen with:
      c:\WINDOWS\Microsoft.NET\Framework\v3.5\MSBuild.exe "MyVS2010SolutionFile.sln"
    My hopeful goal would be to allow the developers to have VS2010 (now!) while it is still "ok" for the CC.NET machine and the production servers, which will only have the 3.5 Framework on them for the foreseeable future. Just to be clear, developers NEVER create deployable builds; only the CC.NET machine produces builds that will be pushed as production builds. Any help?

    Read the article

  • Why are there connections open to my databases?

    - by Everett
    I have a program that stores user projects as databases. Naturally, the program should allow the user to create and delete the databases as they need to. When the program boots up, it looks for all the databases in a specific SQL Server instance that have the structure the program is expecting. These databases are then loaded into a listbox so the user can pick one to open as a project to work on. When I try to delete a database from the program, I always get an SQL error saying that the database is currently open and the operation fails. I've determined that the code that checks for the databases to load is causing the problem. I'm not sure why, though, because I'm quite sure that all the connections are being properly closed. Here are all the relevant functions. After calling BuildProjectList, running "DROP DATABASE database_name" from ExecuteSQL fails with the message: "Cannot drop database because it is currently in use". I'm using SQL Server 2005.
      private SqlConnection databaseConnection;
      private string connectionString;
      private ArrayList databases;

      public ArrayList BuildProjectList()
      {
          // databases is an ArrayList of all the databases in an instance
          if (databases.Count <= 0)
          {
              return null;
          }
          ArrayList databaseNames = new ArrayList();
          for (int i = 0; i < databases.Count; i++)
          {
              string db = databases[i].ToString();
              connectionString = "Server=localhost\\SQLExpress;Trusted_Connection=True;Database=" + db + ";";
              // Check if the database has the table required for the project
              string sql = "select * from TableExpectedToExist";
              if (ExecuteSQL(sql))
              {
                  databaseNames.Add(db);
              }
          }
          return databaseNames;
      }

      private bool ExecuteSQL(string sql)
      {
          bool success = false;
          openConnection();
          SqlCommand cmd = new SqlCommand(sql, databaseConnection);
          try
          {
              cmd.ExecuteNonQuery();
              success = true;
          }
          catch (SqlException ae)
          {
              MessageBox.Show(ae.Message.ToString());
          }
          closeConnection();
          return success;
      }

      public void openConnection()
      {
          databaseConnection = new SqlConnection(connectionString);
          try
          {
              databaseConnection.Open();
          }
          catch (Exception e)
          {
              MessageBox.Show(e.ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
          }
      }

      public void closeConnection()
      {
          if (databaseConnection != null)
          {
              try
              {
                  databaseConnection.Close();
              }
              catch (Exception e)
              {
                  MessageBox.Show(e.ToString(), "Error", MessageBoxButtons.OK, MessageBoxIcon.Error);
              }
          }
      }
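
    A likely culprit (stated as an assumption, since it isn't confirmed in the question) is ADO.NET connection pooling: SqlConnection.Close() returns the physical connection to the pool rather than closing it, so SQL Server still sees the probe connections from BuildProjectList as open. A hedged sketch of the usual remedies:

      using System.Data.SqlClient;

      class DropHelper
      {
          // Option 1: clear the pooled connections before dropping the database.
          public static void DropDatabase(string name)
          {
              SqlConnection.ClearAllPools();   // or ClearPool(conn) for a single pool
              using (var master = new SqlConnection(
                  "Server=localhost\\SQLExpress;Trusted_Connection=True;Database=master;"))
              {
                  master.Open();
                  using (var cmd = new SqlCommand("DROP DATABASE [" + name + "]", master))
                  {
                      cmd.ExecuteNonQuery();
                  }
              }
          }

          // Option 2: append "Pooling=false;" to the probe connection string used in
          // BuildProjectList, so Close() really closes the physical connection.
      }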

    Read the article

  • Qt Serial Port Errors - Data not getting read

    - by user2970546
    I'm trying to read a serial port with the Qt SerialPort library. I can read the data using HyperTerminal. In Qt I used the following code to try and do the same thing. Qt says that the port has been opened correctly, but for some reason bytesAvailable() on the serial port is always 0.
      serial.setPortName("COM20");
      if (serial.open(QIODevice::ReadOnly))
          qDebug() << "Opened port " << endl;
      else
          qDebug() << "Unable to open port" << endl;
      serial.setDataBits(QSerialPort::Data8);
      serial.setParity(QSerialPort::EvenParity);
      serial.setBaudRate(QSerialPort::Baud115200);
      qDebug() << "Is open?? " << serial.isOpen();

      // Wait until serial port data is ready
      while (!serial.bytesAvailable())
      {
          //qDebug() << serial.bytesAvailable()<<endl;
          continue;
      }
      QByteArray data = serial.read(100);
      qDebug() << "This is the data -" << data << endl;
      serial.close();
    In comparison, MATLAB code with the same structure as the above successfully manages to read the serial port data:
      %Serial Port Grapher - Shurjo Banerjee
      s = serial('COM20');
      s.BaudRate = 460800;
      s.Parity = 'even';
      try
          input('Ready to begin?');
      catch
      end
      fopen(s);

      fh = figure();
      hold on;
      t = 1;
      while (s.BytesAvailable <= 0)
          continue
      end
      a = fread(s, 1)
      old_t = 1;
      old_a = a;
      while true
          if (s.BytesAvailable > 0)
              a = fread(s, 1)
              figure(fh)
              t = t + 1;
              plot([old_t t], [old_a a]);
              old_t = t;
              old_a = a;
          end
      end
      fclose(s);

    Read the article

  • ASP.NET Content Web Form - content from placeholder disappears

    - by Naeem Sarfraz
    I'm attempting to set a class on the body tag in my ASP.NET site, which uses a master page and content web forms. I simply want to be able to do this by adding a bodycssclass property (see below) to the content web form's page directive. It works through the solution below, but when I attempt to view Default.aspx the Content1 control loses its content. Any ideas why? Here is how I'm doing it. I have a master page with the following content:
      <%@ Master Language="C#" ... %>
      <html><head>...</head>
      <body id=ctlBody runat=server>
        <asp:ContentPlaceHolder ID="cphMain" runat="server" />
      </body>
      </html>
    Its code-behind looks like:
      public partial class Site : MasterPageBase
      {
          public override string BodyCssClass
          {
              get { return ctlBody.Attributes["class"]; }
              set { ctlBody.Attributes["class"] = value; }
          }
      }
    which inherits from:
      public abstract class MasterPageBase : MasterPage
      {
          public abstract string BodyCssClass { get; set; }
      }
    My default.aspx is defined as:
      <%@ Page Title="..." [master page definition etc..] bodycssclass="home" %>
      <asp:Content ID="Content1" ContentPlaceHolderID="cphMain" runat="server">
        Some content
      </asp:Content>
    The code-behind for this file looks like:
      public partial class Default : PageBase { ... }
    and it inherits from:
      public class PageBase : Page
      {
          public string BodyCssClass
          {
              get
              {
                  MasterPageBase mpbCurrent = this.Master as MasterPageBase;
                  return mpbCurrent.BodyCssClass;
              }
              set
              {
                  MasterPageBase mpbCurrent = this.Master as MasterPageBase;
                  mpbCurrent.BodyCssClass = value;
              }
          }
      }
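
    If the custom bodycssclass directive attribute turns out to be involved, one hedged alternative (an assumption, not a confirmed diagnosis of the disappearing content) is to drop the attribute and set the same property from the page's code-behind, which exercises exactly the same MasterPageBase plumbing:

      using System;

      public partial class Default : PageBase
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              // Equivalent to bodycssclass="home" in the @Page directive,
              // without relying on a custom directive attribute.
              BodyCssClass = "home";
          }
      }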

    Read the article

  • Rational Application Developer (RAD) 7.5+ and websphere runtime will not pick up jars from projects

    - by Berlin Brown
    With RAD version 7.5.3, Java 1.5. I have a couple of different projects. I needed to break out the Java code and turn the *.class files into a jar: basically the same *.class files, I just removed the source and jarred the class files. I then included the jar in the project and also did an order/export on the jar so that other projects can see it. At this point my project ideally should not have changed, because I am using class files in a jar instead of the Java source. But when I visit my web application in WebSphere, I get class-not-found errors on the classes that are now in the jar.
    Project structure:
      A. Project earApp -- will need the webapp
      B. Project webapp -- will need the project (no jar files or *.java files are found in this project)
      C. Project javasrc -- the Java source and the NEW JAR file are found here
    I don't think WebSphere is acknowledging the jar. Here is the error:
      java.lang.NoClassDefFoundError: com.MyApp
        at java.lang.ClassLoader.defineClassImpl(Native Method)
        at java.lang.ClassLoader.defineClass(ClassLoader.java:258)
        at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:151)
        at com.ibm.ws.classloader.CompoundClassLoader._defineClass(CompoundClassLoader.java:675)
        at com.ibm.ws.classloader.CompoundClassLoader.findClass(CompoundClassLoader.java:614)
        at com.ibm.ws.classloader.CompoundClassLoader.loadClass(CompoundClassLoader.java:431)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:597)
        at java.lang.Class.getDeclaredMethodsImpl(Native Method)
        at java.lang.Class.getDeclaredMethods(Class.java:664)
        at com.ibm.ws.webcontainer.annotation.data.ScannedAnnotationData.collectMethodAnnotations(ScannedAnnotationData.java:130)
        at com.ibm.ws.webcontainer.annotation.data.ScannedAnnotationData.<init>(ScannedAnnotationData.java:47)
        at com.ibm.ws.webcontainer.annotation.AnnotationScanner.scanClass(AnnotationScanner.java:61)
        at com.ibm.ws.wswebcontainer.webapp.WebApp.processRuntimeAnnotationHelpers(WebApp.java:711)
        at com.ibm.ws.wswebcontainer.webapp.WebApp.populateJavaNameSpace(WebApp.java:624)
        at com.ibm.ws.wswebcontainer.webapp.WebApp.initialize(WebApp.java:289)
        at com.ibm.ws.wswebcontainer.webapp.WebGroup.addWebApplication(WebGroup.java:93)
        at com.ibm.ws.wswebcontainer.VirtualHost.addWebApplication(VirtualHost.java:162)
        at com.ibm.ws.wswebcontainer.WebContainer.addWebApp(WebContainer.java:671)
        at com.ibm.ws.wswebcontainer.WebContainer.addWebApplication(WebContainer.java:624)
        at com.ibm.ws.webcontainer.component.WebContainerImpl.install(WebContainerImpl.java:395)
        at com.ibm.ws.webcontainer.component.WebContainerImpl.start(WebContainerImpl.java:611)
        at com.ibm.ws.runtime.component.ApplicationMgrImpl.start(ApplicationMgrImpl.java:1274)
        at com.ibm.ws.runtime.component.DeployedApplicationImpl.fireDeployedObjectStart(DeployedApplicationImpl.java:1165)
        at com.ibm.ws.runtime.component.DeployedModuleImpl.start(DeployedModuleImpl.java:587)
        at com.ibm.ws.runtime.component.DeployedApplicationImpl.start(DeployedApplicationImpl.java:832)
        at com.ibm.ws.runtime.component.ApplicationMgrImpl.startApplication(ApplicationMgrImpl.java:921)
        at com.ibm.ws.runtime.component.ApplicationMgrImpl$AppInitializer.run(ApplicationMgrImpl.java:2124)
        at com.ibm.wsspi.runtime.component.WsComponentImpl$_AsynchInitializer.run(WsComponentImpl.java:342)
        at com.ibm.ws.util.ThreadPool$Worker.run(ThreadPool.java:1497)
    What do you think I need to do?

    Read the article

  • How to set up RPX widget and facebook app to be able to authenticate with rpx_now?

    - by Andrei
    Using the sample app for the rpx_now gem (http://github.com/grosser/rpx_now_example) on localhost:3000, I have successfully logged in via Google Accounts, myOpenID and Yahoo, but cannot make it work via Facebook. In the RPX app/widget settings I have set my Facebook app key and secret. In my Facebook app settings, the Connect URL is myappname.rpxnow.com. But when I try to connect I don't even see a Facebook login page, just a number of redirects, and I am back on my localhost with the exception shown below (also at http://gist.github.com/386520). Before, I was successfully connecting with the oauth2 gem, although without fetching user data - only authentication. That time I set only the key/secret and localhost as my Connect URL. Currently I don't even ask for email etc., but still the same problem. Can it happen because rpx_now cannot get the requested user data from Facebook? Or is it a problem with the Facebook key/secret? Maybe I need to provide more settings in my Facebook app?
      RPXNow::ApiError in UsersController#create
      Got error: Invalid parameter: token (code: 1), HTTP status: 200
      RAILS_ROOT: /home/Andrei/rpx_now_example
      Application Trace | Framework Trace | Full Trace
        /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now/api.rb:71:in `parse_response'
        /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now/api.rb:21:in `call'
        /usr/lib/ruby/gems/1.8/gems/rpx_now-0.6.20/lib/rpx_now.rb:23:in `user_data'
        /home/Andrei/rpx_now_example/app/controllers/users_controller.rb:16:in `create'
      Request Parameters: None
      Response Headers: {"Content-Type"="", "Cache-Control"="no-cache"}

    Read the article

  • Tools for Maintaining Branches in SVN

    - by Chris Conway
    My team uses SVN for source control. Recently I've been working on a branch with occasional merges from the trunk and it's been a fairly annoying experience (cf. Joel Spolsky's "Subversion Story #1"), so I've been looking at alternative ways to manage branches and merging. Given that a centralized SVN repository is non-negotiable, what I'd like is a set of tools that satisfies the following conditions:
      1. Complete revision history should be stored in SVN for both trunk and branches.
      2. Merging in either direction (and potentially criss-crossing) should be relatively painless.
      3. Merging history should be stored in SVN to the greatest extent possible.
    I've looked at both git-svn and bzr-svn and neither seems to be up to the job: basically, given the revision history they can export from the SVN repository, they can't seem to do any better a job of handling merges than SVN can. For example, after cloning the repository with git, the revision history for my branch shows the original branch off of trunk, but git doesn't "see" any of the interim SVN merges as "native" merges - the revision history is one long line. As a result, any attempt to merge from trunk in git yields just as many conflicts as an SVN merge would. (Besides, the git-svn documentation explicitly warns against using git to merge between branches.) Is there a way to adjust my workflow to make git satisfy the above requirements? Maybe I just need tips or tricks (or a separate merging tool?) to help SVN be better at merging into branches?

    Read the article

  • Solutions for working with multiple branches in ASP.Net

    - by Corey McKinnon
    At work we are often working on multiple branches of our product at one time. For example, right now we have a maintenance branch, a branch with code just going to QA, and a branch for a new major initiative that won't be merged for some time. Our web project is set up to use IIS, so every time we switch to a different branch we have to go into IIS Admin and change the path on the virtual directory, then reset IIS, and sometimes even restart Visual Studio to avoid getting build errors. Is there any way to simplify this, other than not having our web project set up as a virtual directory? I'm not sure we want to make that change at this point. What do you do to make this easier, assuming you do this? Corey
    @RedWolves, virtual machines would definitely work, but I'm not sure it would be any simpler, especially for some of the other developers on my team, which is partly why I'm looking for more simplicity.
    @Dan, we're not able to change source control providers, unfortunately.
    @pix0r, that's something I'll try when I get back to work. Thanks for the suggestion.
    @Haacked, I'll have to give that a try too, but I think we have some issues with why that won't work (I can't remember exactly why right now; this application was originally written in .NET 1.1, pre-Cassini, and I can't remember if we tried it when we upgraded to 2.0 or not). Thanks all for the responses so far.
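
    One way to take the manual IIS Admin step out of the loop is a small utility run after switching branches. The sketch below uses the IIS 6-style metabase via System.DirectoryServices; the site ID (1) and the vdir name are assumptions to adapt, and on IIS 7 the Microsoft.Web.Administration API would be the equivalent route:

      using System;
      using System.DirectoryServices;

      class SwitchBranch
      {
          static void Main(string[] args)
          {
              // args[0]: physical path of the branch's web project,
              // e.g. C:\src\branches\qa\Web (illustrative).
              string newPath = args[0];

              // "W3SVC/1/Root/MyWebApp" = default web site (ID 1), vdir "MyWebApp".
              using (var vdir = new DirectoryEntry("IIS://localhost/W3SVC/1/Root/MyWebApp"))
              {
                  vdir.Properties["Path"].Value = newPath;
                  vdir.CommitChanges();
              }
              Console.WriteLine("Virtual directory now points at " + newPath);
              // Follow with "iisreset" (or an app pool recycle), as described above.
          }
      }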

    Read the article

  • NHibernate Legacy Database Mappings Impossible?

    - by Corey Coogan
    I'm hoping someone can help me with mapping a legacy database. The problem I'm describing here has plagued others, yet I was unable to find a really good solution around the web. DISCLAIMER: this is a legacy DB. I have no control over the composite keys. They suck and can't be changed no matter how much you tell me they suck. I have 2 tables, both with composite keys. One of the keys from one table is used as part of the key to get a collection from the other table. In short, the keys don't fully match between the tables. ClassB is used everywhere, and I would like to avoid adding properties just for the sake of this mapping if possible.
      public class ClassA
      {
          //[PK]
          public string SsoUid;
          //[PK]
          public string PolicyNumber;

          public IList<ClassB> Others;
          //more properties....
      }

      public class ClassB
      {
          //[PK]
          public string PolicyNumber;
          //[PK]
          public string PolicyDateTime;
          //more properties
      }
    I want to get an instance of ClassA and get all ClassB rows that match PolicyNumber. I am trying to get something going with a one-to-many, but I realize that this may technically be a many-to-many that I am just treating as one-to-many. I've tried using an association class but didn't get far enough to see if it works. I'm new to these more complex mappings and am looking for advice. I'm open to pretty much any ideas. Thanks, Corey

    Read the article

  • Post a form from asp to asp.Net

    - by Atomiton
    I have a classic ASP application. I want to post a contest form from that page to an ASP.NET form. The reason is that I want to use a lot of the logic I have built into an ASP.NET page for validation before entering into the database, and I don't know classic ASP very well; not to mention ASP.NET being more secure. What's the best way to accomplish this goal? My thoughts are as follows. My ASP page:
      <html>
      <body>
        <form action="/Contests/entry.aspx" method="post">
          Name: <input type="text" name="fname" size="20" />
          Last Name: <input type="text" name="lname" size="20" />
          <input type="submit" value="Submit" />
        </form>
      </body>
      </html>
    The aspx page is running in a virtual directory and would handle anything posted to it. Is this possible, or does ASP.NET prevent this kind of thing? I (preferably) don't want to create the form in ASP.NET, as my colleague wants to have control of the page and build the HTML himself, and I don't want the hassle of constantly changing it. Are there caveats I need to consider? What roadblocks will I run into? How do I access the posted form values? Request.Form?
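
    Cross-page posts like this arrive as ordinary form data, and Request.Form is indeed the way to read them. A minimal sketch of what entry.aspx's code-behind could look like (the class name and the validation/DB calls are illustrative assumptions):

      using System;
      using System.Web.UI;

      public partial class Entry : Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              if (string.Equals(Request.HttpMethod, "POST", StringComparison.OrdinalIgnoreCase))
              {
                  // Names match the input names on the classic ASP form.
                  string firstName = Request.Form["fname"];
                  string lastName = Request.Form["lname"];

                  // Run the existing validation/DB logic here, then redirect or render a result.
              }
          }
      }

    One caveat worth checking: if the posted values can contain markup, ASP.NET request validation may reject the request, so the page may need request validation relaxed or the input sanitized on the classic ASP side.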

    Read the article

  • Would OpenID or OAuth work for authorization/authentication on a distributed web service?

    - by David Eyk
    We're in the early stages of designing a RESTful/resource-oriented web service API for a computational linguistics application. Because many of the resources we plan to serve are rights-encumbered, a key design decision has been to specify the platform so that each resource provider can expose their own web service that complies with the API spec. This way, the rights owner maintains control over their content (and thus the ability to throttle or deny access at will) and a direct relationship with the consumer, while still being able to participate in the collaborative network. At the same time, to simplify the job of writing a client for this service, we want to allow a client access to the distributed service through one endpoint, with the server handling content negotiation and retrieval from the appropriate providers. Right now, we're at an impasse on authentication/authorization schemes. One of our number has argued for the (technical) simplicity of a central authentication registry, but others are concerned about the organizational complexity of such a scheme. It seems to me, based on an admittedly limited understanding of the technologies, that a combination of OpenID and OAuth would do the trick, with a client authenticating with the endpoint via OpenID, and the server taking action on the user's behalf with the various content providers using OAuth. I've only ever seen implementations (e.g. stackoverflow, twitter, etc.) where a human was present to intervene, and I still need to do more research on these technologies. Would a scheme like this work for an automated web service, or would it make the client too difficult to implement and operate?

    Read the article

  • SQL Server to PostgreSQL - Migration and design concerns

    - by youwhut
    I am currently migrating from SQL Server to PostgreSQL and attempting to improve a couple of key areas on the way. I have an Articles table:
      CREATE TABLE [dbo].[Articles](
          [server_ref] [int] NOT NULL,
          [article_ref] [int] NOT NULL,
          [article_title] [varchar](400) NOT NULL,
          [category_ref] [int] NOT NULL,
          [size] [bigint] NOT NULL
      )
    Data (comma-delimited text files) is dumped on the import server by ~500 (out of ~1000) servers on a daily basis. Importing works like this:
      1. Indexes are disabled on the Articles table.
      2. For each dumped text file, the data is BULK copied to a temporary table.
      3. The temporary table is updated.
      4. Old data for the server is dropped from the Articles table.
      5. The temporary table data is copied to the Articles table.
      6. The temporary table is dropped.
    Once this process is complete for all servers, the indexes are built and the new database is copied to a web server. I am reasonably happy with this process but there is always room for improvement as I strive for a real-time (haha!) system. Is what I am doing correct? The Articles table contains ~500 million records and is expected to grow. Searching across this table is okay but could be better, i.e.
      SELECT * FROM Articles WHERE server_ref=33 AND article_title LIKE '%criteria%'
    has been satisfactory, but I want to improve the speed of searching. Obviously the LIKE is my problem here. Suggestions?
      SELECT * FROM Articles WHERE article_title LIKE '%criteria%'
    is horrendous. Partitioning is a feature of SQL Server Enterprise, but $$$ - which is one of the many exciting prospects of PostgreSQL. What performance hit will be incurred for the import process (drop data, insert data) and for building indexes? Will the database grow by a huge amount? The database currently stands at 200 GB and will grow. Copying this across the network is not ideal, but it works. I am putting thought into changing the hardware structure of the system. The idea behind having an import server and a web server is that the import server can do the dirty work (WITHOUT indexes) while the web server (WITH indexes) can present reports. Maybe reducing the system down to one server would let me skip the copy-across-the-network stage. This one server would have two versions of the database: one with the indexes for delivering reports and the other without for importing new data, and the databases would swap daily. Thoughts? This is a fantastic system, and believe it or not there is some method to my madness in giving it a big shake-up. UPDATE: I am not looking for help with relational databases in general, but hoping to bounce ideas around with data warehouse experts.

    Read the article

  • DATE lookup table (1990/01/01:2041/12/31)

    - by Frank Developer
    I use a DATE master table for looking up dates and other values in order to control several events, intervals and calculations within my app. It has rows for every single day beginning 01/01/1990 and ending 12/31/2041.
    One example of how I use this lookup table: a customer pawned an item on JAN-31-2010 and returns on MAY-03-2010 to make an interest payment to avoid forfeiting the item. If he pays 1 month's interest, the employee enters a "1" and the app looks up the pawn date (JAN-31-2010) in the date master table and puts FEB-28-2010 in the applicable interest payment date. FEB-28 is returned because FEB-31 doesn't exist! If 2010 were a leap year, it would have returned FEB-29. If the customer pays 2 months, MAR-31-2010 is returned; 3 months, APR-30... If the customer pays more than 3 months, or another period not covered by the date lookup table, the employee enters the applicable date manually.
    Here's what the date lookup table looks like:
      { Copyright 1990:2010, Frank Computer, Inc. }
      { DBDATE=YMD4- (correctly sorted for faster lookup) }

      CREATE TABLE datemast
      (
          dm_lookup      DATE,       {lookup col used for obtaining values below}
          dm_workday     CHAR(2),    {NULL=Normal Working Date,}
                                     {NW=National Holiday(Working Date),}
                                     {NN=National Holiday(Non-Working Date),}
                                     {NH=National Holiday(Half-Day Working Date),}
                                     {CN=Company Proclamated(Non-Working Date),}
                                     {CH=Company Proclamated(Half-Day Working Date)}
          {several other columns omitted}
          dm_description CHAR(30),   {NULL, holiday description or any comments}
          dm_day_num     SMALLINT,   {number of elapsed days since beginning of year}
          dm_days_left   SMALLINT,   {number of remaining days until end of year}
          dm_plus1_mth   DATE,       {plus 1 month from lookup date}
          dm_plus2_mth   DATE,       {plus 2 months from lookup date}
          dm_plus3_mth   DATE,       {plus 3 months from lookup date}
          dm_fy_begins   DATE,       {fiscal year begins on for lookup date}
          dm_fy_ends     DATE,       {fiscal year ends on for lookup date}
          dm_qtr_begins  DATE,       {quarter begins on for lookup date}
          dm_qtr_ends    DATE,       {quarter ends on for lookup date}
          dm_mth_begins  DATE,       {month begins on for lookup date}
          dm_mth_ends    DATE,       {month ends on for lookup date}
          dm_wk_begins   DATE,       {week begins on for lookup date}
          dm_wk_ends     DATE,       {week ends on for lookup date}
          {several other columns omitted}
      ) IN "S:\PAWNSHOP.DBS\DATEMAST";
    Is there a better way of doing this, or is it a cool method?

    Read the article
