Search Results

Search found 66410 results on 2657 pages for 'web deployment project'.

Page 218/2657 | < Previous Page | 214 215 216 217 218 219 220 221 222 223 224 225  | Next Page >

  • Is it the job of a developer to suggest IT requirements?

    - by anything
    I am the only developer working on a web application which is nearing completion. We are now looking at making it live in maybe a couple of months' time. This is a web application for a non-IT company. Though they have their own internal IT team, they have asked me what the hardware requirements for the live servers will be, e.g. RAM, 32-bit or 64-bit. Shouldn't the internal IT team be doing this, or, since I am the only person working on the project, is it my responsibility to let them know of any specific hardware requirements which may impact the performance of the project? The reason I am asking is that I have not done this before. I have always been given a server and asked to deploy apps onto it; I never had to worry about the server configuration.

    Read the article

  • How many tasks to plan beforehand [closed]

    - by no__seriously
    As for my daily routine: every morning when I come to work, I look at the items in my todo-list inbox (noted the previous day). For each task I think about which day I should get started on it, and then group them accordingly. Once that's finished, I get started with my actual schedule for the day. Now, this pre-planning for each task (which could concern anything from user interface work to compiler programming) is mostly pretty sketchy. Serious thought about design and implementation comes when the task is about to be tackled. This approach works for me and I can't really complain. But I'm wondering: since I'm personally most productive during the morning, would it make sense to go into a deeper level of planning for each task right away? Or is that unproductive and more likely to confuse than clarify? I think the latter. How do you handle your task management for each task or project, and how far do you go with planning before even getting started on that item?

    Read the article

  • How would you manage development between many Staging branches?

    - by Trip
    We have a Staging branch. Then we came out with a Beta branch so users could move, whenever they wanted, from the old Production branch to the new features. Our plan seemed simple: we test on Staging, and when items get QA'd, they get cherry-picked and deployed to Beta. Here's the problem! A bug will quietly make its way onto Beta, and since Beta is a production environment, it needs fixes fast and accurate. But not all the QA got done. Enter Git hell. So I find a problem on Beta. No sweat, it's already been fixed on Staging, but when I go to cherry-pick the item over, Beta barely has any of the other prerequisite code needed to implement this small change. Now Beta has a little here and a little there, and I can't imagine it being as stable a code base as Staging. What's more, I'm dealing with some insane Git conflicts and having to monkey-patch a bunch of things to make up for what Beta hasn't caught up with from Staging. Can someone, in polite or non-polite terms, tell me what we're doing wrong here as far as assembling this project? Any recommendations, workarounds or alternatives to the system we came up with?
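
    One way to see why a lone cherry-pick conflicts is to list the Staging commits that touch the same files as the fix but have not reached Beta yet. A rough Python sketch of that check (branch names and the example SHA are placeholders, and it assumes it is run from inside the repository):

        # List Staging commits that touch the same files as the fix we want to
        # cherry-pick but are not on Beta yet - the missing "pre-requisites".
        import subprocess

        def git(*args):
            return subprocess.run(["git", *args], capture_output=True, text=True, check=True).stdout

        def missing_prerequisites(fix_sha, beta="beta", staging="staging"):
            # Files touched by the fix.
            files = git("show", "--name-only", "--pretty=format:", fix_sha).split()
            # Commits on Staging but not on Beta that touch any of those files.
            log = git("log", "--pretty=format:%H %s", f"{beta}..{staging}", "--", *files)
            return [line for line in log.splitlines() if line and not line.startswith(fix_sha)]

        if __name__ == "__main__":
            for commit in missing_prerequisites("abc1234"):
                print("Beta is missing:", commit)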

    Read the article

  • How to update application files using patching?

    - by Marek
    I am not interested in any auto-update solution, such as ClickOnce or the MS Updater Block. For anyone feeling the urge to ask why not: I am already using these and there is nothing wrong with them, I would just like to learn about any efficient alternatives. I would like to publish patches - small differences that modify the existing files of the deployment with the smallest possible delta. Not only code needs to be patched, but also resource files. Patching the running code can be accomplished by maintaining two separate synchronized copies of the deployment (no on-the-fly changes to the running executable are required). The application itself can be xcopy-deployed (to avoid MSI auto-correcting the modified files or breaking ClickOnce signatures). I would like to learn how to handle different versions of patches: e.g. one patch is issued that fixes one error, and later another patch fixes a different error in the same file - users may have any combination of these installed when a third patch arrives. In text files this may be easy to implement, but how about executable files (native Win32 code vs. .NET - any difference)? If the first problem is too hard or unsolvable for executables, I would like to at least learn whether there is a solution that implements simple patching with serial revisions - in order to install revision 5, the user must have all previous revisions installed to ensure the validity of the deployment. Are there any existing solutions that accomplish this? NOTE: There are a few questions on SO that may seem like duplicates, but none with a good answer. This question is about the Windows platform, preferably .NET.
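
    As an illustration of the serial-revision fallback (revision N may only be applied on top of revision N-1), here is a minimal Python sketch. The manifest format is invented for the example, and the payload is a whole-file replacement rather than a true binary delta, which sidesteps the executable-patching question entirely:

        # Each patch ships a manifest recording the expected hash of every file
        # it replaces, so a patch only applies on top of the exact prior revision.
        import hashlib, json, pathlib

        def sha256(path: pathlib.Path) -> str:
            return hashlib.sha256(path.read_bytes()).hexdigest()

        def apply_patch(app_dir: str, patch_dir: str) -> None:
            app, patch = pathlib.Path(app_dir), pathlib.Path(patch_dir)
            manifest = json.loads((patch / "manifest.json").read_text())
            # Precondition: every target file must be at the revision this patch expects.
            for entry in manifest["files"]:
                if sha256(app / entry["path"]) != entry["before_sha256"]:
                    raise RuntimeError(f"{entry['path']} is not at the expected revision")
            # All checks passed - copy the new file contents into place.
            for entry in manifest["files"]:
                (app / entry["path"]).write_bytes((patch / entry["payload"]).read_bytes())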

    Read the article

  • bin-deploying DLLs banned in lieu of GAC on shared IIS 6 servers

    - by craigmoliver
    I need to solicit feedback about a recent security policy change at an organization I work with. They have recently banned the bin-deployment of DLLs to shared IIS 6 application servers. These servers host many isolated web application pools. The new rules require all DLLs to be installed in the GAC. This is a problem for me because I bin-deploy several DLLs, including the ASP.NET MVC Framework, HTML Agility Pack, ELMAH, and my own shared class libraries. I do this because it eliminates web application dependencies on the server's Global Assembly Cache, it allows me (the developer) to have control of what goes on inside my application, it enables the application to be deployed as a "package", and it removes the application deployment burden from the server administrators. Now, here are my questions. From a security perspective, what are the advantages of using the GAC vs. bin-deployment? Is it possible to host multiple versions of the same DLL in the GAC? Has anyone run into similar restrictions?

    Read the article

  • How to correctly deploy Adobe Reader 9.1

    - by Ben Gillam
    Hi. I have recently tried to deploy Adobe Reader 9.1 onto our network here (SBS 2003 server and XP workstations). I followed the instructions for the extraction of the installer and .msi, and then created a .mst transform file to set custom options (suppress EULA, don't create desktop icon, etc.). I then added the package to my deployment GPO, applied the relevant .mst file and proceeded to deploy across the network. The software package is computer-assigned and installed prior to logon, to avoid user permission issues. The package deploys correctly to computers and runs perfectly fine from a shortcut, however when trying to view a PDF from within a web browser it fails with the following message: "The Adobe Acrobat/Reader that is running cannot be used to view PDF files in a web browser. Adobe Acrobat/Reader version 8 or 9 is required. Please exit and try again." I have found many pages on Google referring to this problem, but none appear to relate to the problem I have found. http://kb2.adobe.com/cps/405/kb405461.html These fixes recommend correcting a registry entry (which, I should mention, is missing after the deployed installation); however, this does not work. Switching off display in a browser seems to defeat the object of fixing the problem. Removing old versions - there aren't any. Trying with a different user - this affects all users of all privilege levels on all computers. On my workstation I uninstalled Acrobat Reader 9.1 and then reinstalled manually using the same installation source files, and it works fine. Has anyone successfully deployed AR 9.1 on their domain, and if so, how? For the time being I have downloaded the older 8.1.3 release and deployed it in the same way, which works fine, but I would like to be using the up-to-date version. Thanks

    Read the article

  • How to deploy new instances of the same application (on 1 server) automatically?

    - by Intru
    I'm working on a SaaS application where each customer runs its own version of the application. All the application instances currently run on a single server. This works quite well for us (we need fewer resources in total). The application doesn't use a lot of resources, so even a small VPS would be overkill (and more expensive). Adding a new customer is currently quite a bit of work: create a user that is allowed to ssh; create a new MySQL database and user; create a virtual host for the application; log in with the new user and do a git checkout of the application (in the right location); create tables in the new database and add some init data; add some cron jobs; create a first user that can log in; and add this new instance to capistrano. What would be the best way to automate these tasks? Are there applications that can (given proper configuration) do this? Ideally this should be usable by a salesperson (so something web-based). I could write a (bash) script that does most of these tasks, and then maybe add a small web-based wrapper where someone could provide the domain/default user information. Of course, this would also require a delete script, since some customers will eventually leave, which means that you need a list of all existing customers/instances.
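
    For what it's worth, here is a rough Python sketch of the kind of provisioning script described above, with one shell command per manual step. Every command, path and helper used here is a placeholder for the real server layout, and it would need to run as a privileged user:

        # Placeholder provisioning script: one command per manual step.
        import secrets
        import subprocess

        def sh(cmd: str) -> None:
            print("+", cmd)
            subprocess.run(cmd, shell=True, check=True)

        def provision(customer: str, domain: str) -> None:
            db_pass = secrets.token_urlsafe(16)
            sh(f"useradd --create-home {customer}")                        # ssh user
            sh("mysql -u root -e \""
               f"CREATE DATABASE {customer}; "
               f"CREATE USER '{customer}'@'localhost' IDENTIFIED BY '{db_pass}'; "
               f"GRANT ALL ON {customer}.* TO '{customer}'@'localhost';\"")
            sh(f"git clone /srv/repos/app.git /home/{customer}/app")       # checkout
            sh(f"/home/{customer}/app/bin/setup --domain {domain}")        # hypothetical helper: vhost, tables, seed data
            sh(f"crontab -u {customer} /home/{customer}/app/config/cron")  # cron jobs
            # First login user, capistrano registration and the matching
            # delete script would follow the same pattern.

        if __name__ == "__main__":
            provision("acme", "acme.example.com")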

    Read the article

  • How to repair/uninstall Visual Web Developer 2010 Beta 2?

    - by Sprinter
    I cannot uninstall Visual Web Developer 2010 Beta 2. The first time I tried to uninstall it, the power to my machine went off and the Beta 2 installation was corrupted. I can no longer find a Visual Web Developer 2010 Beta 2 download on the Microsoft website to repair the Beta 2 installation. How can I get VWD 2010 Beta 2 off of my system so I can install the new release?

    Read the article

  • Is there any performance comparison between Perl web frameworks?

    - by DVK
    I have seen mentions (which sounded like unsubstantiated opinions, and dated ones at that) that Embperl is the fastest Perl web framework. I was wondering if there's a consensus on the relative speed of the major stable Perl web frameworks, or, ideally, some sort of fact-based performance comparison between implementations of the same sample web apps, or of individual pieces of functionality (e.g. session handling or form data processing)?

    Read the article

  • Windows Azure Evolution – Deploy Web Sites (WAWS Part 3)

    - by Shaun
    This is the sixth post of my Windows Azure Evolution series. After talking a bit about the new caching preview feature in the previous one, let's get back to Windows Azure Web Sites (WAWS).   Git and GitHub Integration   In the third post I introduced the overall functionality of WAWS and demonstrated how to create a WordPress blog through the built-in application gallery. And in the fourth post I covered how to use the TFS service preview to deploy an ASP.NET MVC application to the web site through the TFS integration. WAWS also has Git integration. I'm not going to talk in much detail about the Git and GitHub integration since there is plenty of information on the internet you can refer to. To enable Git, just go to the web site item in the developer portal and click "Set up Git publishing". After you specify a username and password, the Windows Azure platform will establish the Git integration and provide some basic guidance. As you can see, you can download the Git binaries, commit the files and then push to the remote repository. Regarding GitHub, since it's built on top of Git it should work as well. Maarten Balliauw has a wonderful post about how to integrate GitHub with Windows Azure Web Sites, which you can find here.   WebMatrix 2 RC   WebMatrix is a lightweight web application development tool provided by Microsoft. It uses WebDeploy or FTP to deploy the web application to the server, and WebMatrix 2.0 RC added the feature to work with Windows Azure. First of all we need to download the latest WebMatrix 2 through the Web Platform Installer 4.0: just open the WebPI and search for "WebMatrix", or go to its home page and download its web installer. Once we have WebMatrix 2, we need to download the publish file of our WAWS. Go to the developer portal, open the web site we want to deploy, and download the publish file from the link on the right-hand side. This file contains the information necessary for publishing the web site through WebDeploy and FTP, and can be used in WebMatrix, Visual Studio, etc. Once we have the publish file we can open WebMatrix and click Open Site, Remote Site. This brings up a dialog where we can input the information of the remote site. Since we already have our publish file, we can click "Import publish settings" and select the publish file, and the site information will be populated automatically. Click OK, and WebMatrix will connect to the remote site - the WAWS we already deployed - and retrieve the folder and file information. We can open files in WebMatrix and modify them, but since WebMatrix is a lightweight web application tool, we cannot update the backend C# code; in this case we will modify the frontend home page only. After we save our modification, WebMatrix compares the local and remote files and uploads only the modified files to Windows Azure using the connection information in the publish file. Since it only updates the files which were changed, this minimizes the bandwidth and deployment duration. After a few seconds we go back to the website and the modification has been applied.   Visual Studio and WebDeploy   The publish file we downloaded can be used not only in WebMatrix but also in Visual Studio. As we know, in Visual Studio we can publish a web application by clicking the "Publish" item in the project context menu in the solution explorer, and we can specify WebDeploy, FTP or File System as the publish target.
Now we can use the WAWS publish file to let Visual Studio publish the web application to WAWS. Let's create a new ASP.NET MVC Web Application in Visual Studio 2010 and then click "Publish" in the solution explorer. Once we have the Windows Azure SDK 1.7 installed, it updates the web application publish dialog, so we can import the publish information from the publish file. Select WebDeploy as the publish method; we can select FTP as well, which is supported by Windows Azure, and the FTP information is in the same publish file. In the last step the publish wizard can check the files which will be uploaded to the remote site before actually publishing. This gives us a chance to review and amend the files. The same as in WebMatrix, Visual Studio will compare the local files with WAWS and determine which have been changed and need to be published. Finally Visual Studio publishes the web application to Windows Azure through the WebDeploy protocol. Once it finishes we can browse our website.   FTP Deployment   The publish file we downloaded contains the connection information to our web site via both WebDeploy and FTP. When using WebMatrix and Visual Studio we can select WebDeploy or FTP. The WebDeploy method can be used very easily from WebMatrix and Visual Studio, with the file-compare feature, but FTP gives more flexibility: we can use any FTP client to upload files to Windows Azure, regardless of which client and OS we are using. If we open the publish file in any text editor, we can find the connection information very easily. As you can see, the publish file is actually an XML file with the WebDeploy and FTP information in plain-text attributes. And once we have the FTP URL, username and password, we can connect to the site and upload and download files. For example, I opened FileZilla and connected to my WAWS through FTP. I can then download the files I am interested in, modify them on my local disk, and upload them back to Windows Azure through FileZilla. Then I can see the new page.   Summary   In this simple and quick post I introduced various approaches to deploying our web application to a Windows Azure Web Site. It supports the TFS integration which I mentioned previously, and it also supports Git and GitHub, WebDeploy and FTP.   Hope this helps, Shaun All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
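
As a rough illustration of that last approach, here is a minimal Python sketch that reads the FTP credentials out of the downloaded publish file and uploads a single changed file - the same thing done by hand in FileZilla above. The attribute names (publishMethod, publishUrl, userName, userPWD) are assumptions about the publish file's schema, so check them against the actual file:

        # Upload one changed file to the site using the FTP profile from the publish file.
        import ftplib
        import xml.etree.ElementTree as ET
        from urllib.parse import urlparse

        def ftp_profile(publish_settings_path):
            root = ET.parse(publish_settings_path).getroot()
            for profile in root.iter("publishProfile"):
                if profile.get("publishMethod") == "FTP":
                    return profile
            raise ValueError("no FTP profile found in the publish file")

        def upload(publish_settings_path, local_file, remote_name):
            profile = ftp_profile(publish_settings_path)
            url = urlparse(profile.get("publishUrl"))   # e.g. ftp://host/site/wwwroot
            ftp = ftplib.FTP(url.hostname)
            ftp.login(profile.get("userName"), profile.get("userPWD"))
            ftp.cwd(url.path or "/")
            with open(local_file, "rb") as fh:
                ftp.storbinary(f"STOR {remote_name}", fh)
            ftp.quit()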

    Read the article

  • Fraud Detection with the SQL Server Suite Part 2

    - by Dejan Sarka
    This is the second part of the fraud detection whitepaper. You can find the first part in my previous blog post about this topic. My Approach to Data Mining Projects   It is impossible to evaluate the time and money needed for a complete fraud detection infrastructure in advance. Personally, I do not know the customer's data in advance. I don't know whether there is already an existing infrastructure, like a data warehouse, in place, or whether we would need to build one from scratch. Therefore, I always suggest starting with a proof-of-concept (POC) project. A POC takes something between 5 and 10 working days, and involves personnel from the customer's site - either employees or outsourced consultants. The team should include a subject matter expert (SME) and at least one information technology (IT) expert. The SME must be familiar with both the domain in question and the meaning of the data at hand, while the IT expert should be familiar with the structure of the data, how to access it, and have some programming (preferably Transact-SQL) knowledge. With more than one IT expert, the most time-consuming work, namely data preparation and overview, can be completed sooner. I assume that the relevant data is already extracted and available at the very beginning of the POC project. If a customer wants to have their people involved in the project directly and requests the transfer of knowledge, the project begins with training. I strongly advise this approach, as it establishes a common background for all people involved, an understanding of how the algorithms work and how the results should be interpreted, familiarity with the SQL Server suite, and more. Once the data has been extracted, the customer's SME (i.e. the analyst) and the IT expert assigned to the project will learn how to prepare the data in an efficient manner. Working together, our combined knowledge and expertise allow us to focus immediately on the most interesting attributes and to identify any additional, calculated ones soon after. By employing our programming knowledge, we can, for example, prepare tens of derived variables, detect outliers, identify the relationships between pairs of input variables, and more, in only two or three days, depending on the quantity and the quality of the input data. I welcome the customer's decision to assign additional personnel to the project. For example, I actually prefer to work with two teams simultaneously. I demonstrate and explain the subject matter by applying techniques directly on the data managed by each team, and then both teams continue to work on the data overview and data preparation under our supervision. I explain to the teams what kind of results we expect, the reasons why they are needed, and how to achieve them. Afterwards we review and explain the results, and continue with new instructions, until we resolve all known problems. The data overview is performed simultaneously with the data preparation. The logic behind this task is the same - again I show the teams involved the expected results, how to achieve them and what they mean. This is also done in multiple cycles, as is the case with data preparation, because, quite frankly, both tasks are completely interleaved. One specific objective of the data overview is of principal importance: it is represented by a simple star schema and a simple OLAP cube that will, first of all, simplify data discovery and interpretation of the results, and will also prove useful in the following tasks.
The presence of the customer's SME is the key to resolving possible issues with the actual meaning of the data. We can always replace the IT part of the team with another database developer; however, we cannot conduct this kind of project without the customer's SME. After the data preparation, and when the data overview is available, we begin the scientific part of the project. I assist the team in developing a variety of models and in interpreting the results. The results are presented graphically, in an intuitive way. While it is possible to interpret the results on the fly, a much more appropriate alternative is possible if the initial training was also performed, because it allows the customer's personnel to interpret the results by themselves, with only some guidance from me. The models are evaluated immediately by using several different techniques. One of the techniques includes evaluation over time, where we use an OLAP cube. After evaluating the models, we select the most appropriate model to be deployed for a production test; this allows the team to understand the deployment process. There are many possibilities for deploying data mining models into production; at the POC stage, we select the one that can be completed quickly. Typically, this means that we add the mining model as an additional dimension to an existing DW or OLAP cube, or to the OLAP cube developed during the data overview phase. Finally, we spend some time presenting the results of the POC project to the stakeholders and managers. Even from a POC, the customer will receive lots of benefits, all at the sole risk of spending money and time for a single 5 to 10 day project: the customer learns the basic patterns of fraud and fraud detection; the customer learns how to do the entire cycle with their own people, relying on me only for the most complex problems; the customer's analysts learn how to perform much more in-depth analyses than they ever thought possible; the customer's IT experts learn how to perform data extraction and preparation much more efficiently than they did before; all of the attendees of the training learn how to use their own creativity to implement further improvements of the process and procedures, even after the solution has been deployed to production; the POC output for a smaller company, or for a subsidiary of a larger company, can actually be considered a finished, production-ready solution; and it is possible to utilize the results of the POC project at subsidiary level as a finished POC project for the entire enterprise. Typically, the project also results in several important "side effects": improved data quality; improved employee job satisfaction, as they are able to proactively contribute to the central knowledge about fraud patterns in the organization; and, because eventually more minds get involved in the enterprise, more and better fraud detection patterns. After the POC project is completed as described above, the actual project would not need months of engagement from my side. This is possible due to our preference to transfer the knowledge onto the customer's employees: typically, the customer will use the results of the POC project for some time, and only engage me again to complete the project, or to ask for additional expertise if the complexity of the problem increases significantly.
I usually expect to perform the following tasks: establish the final infrastructure to measure the efficiency of the deployed models; deploy the models in additional scenarios, through reports and by including Data Mining Extensions (DMX) queries in OLTP applications to support real-time early warnings; include data mining models as dimensions in OLAP cubes, if this was not done already during the POC project; and create smart ETL applications that divert suspicious data for immediate or later inspection. I would also offer to investigate how the outcome could be transferred automatically to the central system; for instance, if the POC project was performed in a subsidiary while a central system is available as well. Of course, for the actual project, I would repeat the data and model preparation as needed. It is virtually impossible to tell in advance how much time the deployment would take before we decide together with the customer what exactly the deployment process should cover. Without considering the deployment part, and with the POC project conducted as suggested above (including the transfer of knowledge), the actual project should still only take an additional 5 to 10 days. The approximate timeline for the POC project is as follows: 1-2 days of training; 2-3 days for data preparation and data overview; 2 days for creating and evaluating the models; 1 day for initial preparation of the continuous learning infrastructure; and 1 day for presentation of the results and discussion of further actions. Quite frequently I receive the following question: are we going to find the best possible model during the POC project, or during the actual project? My answer is always quite simple: I do not know. Maybe, if we spent just one more hour on data preparation, or created just one more model, we could get better patterns and predictions. However, we simply must stop somewhere, and the best possible way to do this, in my experience, is to restrict the time spent on the project in advance, by agreement with the customer. You must also never forget that, because we build the complete learning infrastructure and transfer the knowledge, the customer will be capable of doing further investigations independently and improving the models and predictions over time without the need for constant engagement with me.

    Read the article

  • How to get cursor to follow text when reading a web page?

    - by Jack BeNimble
    I know this isn't strictly program related, but I think I've seen this answer on SO before and lost track of it. The specific question has to do with reading an electronic document. I find it helpful to move the cursor across the words as I'm reading them. This works great with Word documents, but I'm unable to do it with web pages. Is there a way to make a web page see and respond to cursor movement?

    Read the article

  • How do I access SourceGear Web services using SOAP::Lite?

    - by user565793
    For some reason SourceGear provides an undocumented Web service on their installations. They actually ask developers to use the API instead because the Web service is kinda messy, but this is a problem in my case because I cannot use this API in a Perl environment, so their solution for my specific case is to use the Web service. This shouldn't be a problem. Using SOAP::Lite I have connected to several Web services in the past in the same way. But the lack of documentation is a major problem if you don't know where the SOAP calls can be made. I only have an XML sample to decipher where and how to make these calls. It would be great if a real SOAP genius could help me out with this. This is an example of the login call and the expected response: Request POST /fortress/dragnetwebservice.asmx HTTP/1.1 Host: velecloudserver Content-Type: text/xml; charset=utf-8 Content-Length: length SOAPAction: "http://www.sourcegear.com/schemas/dragnet/LoginPlainText" <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <LoginPlainText xmlns="http://www.sourcegear.com/schemas/dragnet"> <strLogin>string</strLogin> <strPassword>string</strPassword> </LoginPlainText> </soap:Body> </soap:Envelope> Response HTTP/1.1 200 OK Content-Type: text/xml; charset=utf-8 Content-Length: length <?xml version="1.0" encoding="utf-8"?> <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"> <soap:Body> <LoginPlainTextResponse xmlns="http://www.sourcegear.com/schemas/dragnet"> <LoginPlainTextResult>int</LoginPlainTextResult> <strAuthTicket>string</strAuthTicket> </LoginPlainTextResponse> </soap:Body> </soap:Envelope> I'm looking for a way to be able to assemble this somehow. This is my Perl example: my $soap = SOAP::Lite -> uri ('http://velecloudserver/fortress/dragnetwebservice.asmx') -> proxy('http://velecloudserver/fortress/dragnetwebservice.asmx/LoginPlainText'); my $som = $soap->call('LoginPlainText', SOAP::Data->name('LoginPlainText')->value( \SOAP::Data->value([ SOAP::Data->name('strLogin')->value( 'admin' ), SOAP::Data->name('strPassword')->value('Adm1234'), ])) ); Any tip would be appreciated.
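
    One SOAP::Lite-independent way to confirm the endpoint and the SOAPAction header is to post the documented envelope directly and inspect the raw response. A minimal Python sketch, using the host and credentials from the example above:

        # Post the LoginPlainText envelope exactly as documented and print the raw
        # SOAP response, to confirm the endpoint and SOAPAction before wiring the
        # same call up in SOAP::Lite.
        import urllib.request

        ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
        <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                       xmlns:xsd="http://www.w3.org/2001/XMLSchema"
                       xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
          <soap:Body>
            <LoginPlainText xmlns="http://www.sourcegear.com/schemas/dragnet">
              <strLogin>admin</strLogin>
              <strPassword>Adm1234</strPassword>
            </LoginPlainText>
          </soap:Body>
        </soap:Envelope>"""

        req = urllib.request.Request(
            "http://velecloudserver/fortress/dragnetwebservice.asmx",
            data=ENVELOPE.encode("utf-8"),
            headers={
                "Content-Type": "text/xml; charset=utf-8",
                "SOAPAction": '"http://www.sourcegear.com/schemas/dragnet/LoginPlainText"',
            },
        )
        print(urllib.request.urlopen(req).read().decode("utf-8"))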

    Read the article

  • WCF Duplex Interaction with Web Server

    - by Mark Struzinski
    Here is my scenario, and it is causing us a considerable amount of grief at the moment: We have a vendor web service which provides base-level telephony functionality. This service has a SOAP API, which we are leveraging to build up a custom UI that is integrated into our in-house web apps. The API functions on 2 levels. You make standard client calls into the service to initiate actions, such as Login, Place Call, Hang Up, etc. On a different thread, the service sends events back to the client to alert the user of things that are occurring on the system (agent successfully logged in, call was disconnected, etc). I implemented a WCF service to sit between the web server and the vendor service. This WCF service operates in duplex mode, establishing a 2-way connection with the web server. The web server makes outbound calls to the WCF service, which routes them to the vendor's web service. Events are received back by the WCF service, which passes them on to the web server via a callback channel on the WCF client. As events are received on the web server, they are placed into a hash table with the user's name as the key and a .NET queue as the value to hold the events. Each event is enqueued to the agent who owns it. On a 2-second interval, the web page polls the web server via an AJAX request to get new events for the logged-in user. It hits the hash table for the user key, dequeues any events that are present, and serializes them back up to the web page. From there, they are processed in order and appropriate messages are displayed to the user. This implementation performs well in a single-user scenario. The second I put more than 1 user on the system, I start getting frequent timeouts with the following CommunicationException: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. We are running Windows Server 2008 R2 on both servers. Both the web app and the WCF service are running on .NET 3.5. The WCF service is running under the net.tcp protocol in duplex mode. The web app is ASP.NET MVC 2. Has anyone dealt with anything like this scenario? Is there a more efficient way (or a widely accepted pattern) to implement this?
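
    For reference, here is the per-user event buffer described above, sketched in Python with explicit locking. The language is incidental; the point is that once several users (and therefore several callback threads) are active, the shared hash table and its queues need to be synchronized:

        # Callbacks enqueue events under the owning user's key; the 2-second
        # AJAX poll drains that user's queue.
        import threading
        from collections import defaultdict, deque

        class EventBuffer:
            def __init__(self):
                self._lock = threading.Lock()
                self._queues = defaultdict(deque)   # user name -> pending events

            def publish(self, user: str, event: dict) -> None:
                # Called from the callback thread when the vendor service raises an event.
                with self._lock:
                    self._queues[user].append(event)

            def drain(self, user: str) -> list:
                # Called by the polling endpoint; returns and clears the user's events.
                with self._lock:
                    events = list(self._queues[user])
                    self._queues[user].clear()
                    return events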

    Read the article

  • Is there any web service for getting tidal information?

    - by thor
    Has anyone come across a web service or plugin/code/calculation for getting information on tides - namely high and low tide times? I only need this information for one location (outwith the US) but I need it on an ongoing basis so a web service that I can hit once a day or something would be ideal.

    Read the article

  • Why does my JSF + Spring web application output JSF source code instead of interpreted HTML page?

    - by Corvus
    I'm new to both JSF and Spring Framework and I'm trying to figure out how to make them work together. My current problem is that the application outputs my JSF files without interpreting them. Here are some snippets of my code which I believe might be relevant: dispatcher-servlet.xml <bean id="urlMapping" class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping"> <property name="mappings"> <props> <prop key="login.htm">loginController</prop> </props> </property> </bean> <bean id="viewResolver" class="org.springframework.web.servlet.view.InternalResourceViewResolver" p:prefix="/WEB-INF/pages/" p:suffix=".xhtml" /> <bean name="loginController" class="controller.LoginController" /> loginController public class LoginController extends MultiActionController { public ModelAndView login(HttpServletRequest request, HttpServletResponse response) throws Exception { System.out.println("LOGIN"); return new ModelAndView("login"); } WEB-INF/pages/login.xhtml <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html"> <h:head> <title>#{message.log}</title> </h:head> <h:body> <h:form> <h:outputLabel value="#{message.username}" for="userName"> <h:inputText id="userName" value="#{User.name}" /> </h:outputLabel> <h:commandButton value="#{message.loggin}" action="#{User.login}" /> </h:form> </h:body> </html> Any ideas where the problem might be? Does this code make any sense at all? I'm well aware of the fact that it probably completely sucks, and I'll be glad to hear WHY it sucks and how to make it better. Thanks :)

    Read the article
