Search Results

Search found 161595 results on 6464 pages for 'user defined data type'.

Page 17/6464 | < Previous Page | 13 14 15 16 17 18 19 20 21 22 23 24  | Next Page >

  • Test Data in a Distributed System

    - by Davin Tryon
    A question that has been vexing me lately is how to effectively test (end-to-end) features in a distributed system. Particularly, how to effectively manage (through time) test data for feature testing. The system in question is a typical SOA setup: composition is done in JavaScript via calls to several REST APIs, each service is built as an independent block, and each service has some kind of persistent storage (SQL Server in most cases). The main issue at the moment is how to approach test data when testing end-to-end features. Functional end-to-end testing occurs through the UI, so test data must be set up before the test run (this could be manual or automated testing). As is typical in a distributed system, identifiers from one service are used as a link in another service, so some level of synchronization needs to be present in the data to effectively test. What is the best way to manage and set up this data after a successful deployment to a test environment? For example, is it better to manage this test data inside each service, or package it together with the testing suite? Does that testing suite exist as a separate project? I'm interested in design guidance about how to store and manage this test data as the application features evolve.
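
    A minimal sketch of the "package it with the testing suite" option: a seeding step, owned by the test project, that creates linked records through each service's public API so cross-service identifiers stay synchronized by construction. All URLs, endpoints, and payloads here are hypothetical.

        # Hypothetical seeding step run after deployment to the test
        # environment; service endpoints and fields are placeholders.
        import requests

        def seed_end_to_end_fixture(base="http://test-env"):
            # Create the parent record in one service...
            customer = requests.post(f"{base}/customers/api/customers",
                                     json={"name": "Test Customer"}).json()
            # ...and reuse its identifier as the link in the next service,
            # so the cross-service references are consistent by design.
            order = requests.post(f"{base}/orders/api/orders",
                                  json={"customerId": customer["id"]}).json()
            return customer, order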

    Read the article

  • Customizing the NUnit GUI for data-driven testing

    - by rwong
    My test project consists of a set of input data files which are fed into a piece of legacy third-party software. Since the input data files for this software are difficult to construct (not something that can be done intentionally), I am not going to add new input data files. Each input data file will be subject to a set of "test functions". Some of the test functions can be invoked independently. Other test functions represent the stages of a sequential operation - if an earlier stage fails, the subsequent stages do not need to be executed. I have experimented with the NUnit parametrized test case (TestCaseAttribute and TestCaseSourceAttribute), passing in the list of data files as test cases. I am generally satisfied with the ability to select the input data for testing. However, I would like to see if it is possible to customize the GUI's tree structure, so that the "test functions" become the children of the "input data". For example:

        File #1
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
                StageOneTest
                StageTwoTest
                StageThreeTest
        File #2
            CheckFileTypeTest
            GetFileTopLevelStructureTest
            CompleteProcessTest
                StageOneTest
                StageTwoTest
                StageThreeTest

    This will be useful for identifying the stage that failed during the processing of a particular input file. Are there any tips and tricks that will enable the new tree layout? Do I need to customize NUnit to get this layout?
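
    For reference, a minimal sketch of the parametrized setup described above (class, method, and file names are illustrative). Note that with TestCaseSource, NUnit's default tree groups the data files under each test method - the inverse of the file-first layout the question asks for.

        // Illustrative only: each data file becomes a test case of each
        // test function, so the GUI tree reads method -> files by default.
        using System.Collections.Generic;
        using NUnit.Framework;

        [TestFixture]
        public class LegacyFileTests
        {
            // The fixed set of hard-to-construct input data files.
            static IEnumerable<string> DataFiles()
            {
                yield return @"TestData\File1.dat";
                yield return @"TestData\File2.dat";
            }

            [TestCaseSource("DataFiles")]
            public void CheckFileTypeTest(string dataFile) { /* assert on file type */ }

            [TestCaseSource("DataFiles")]
            public void StageOneTest(string dataFile) { /* first sequential stage */ }
        }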

    Read the article

  • Data Management Business Continuity Planning

    Business Continuity Governance

    In order to ensure data continuity, an organization needs to know how to handle a data or network emergency, because all systems have the potential to fail.

    Data Continuity Checklist:

      • Disaster Recovery Plan/Policy
      • Backups
      • Redundancy
      • Trained Staff

    Business Continuity Policies

    In order to protect data in case of any emergency, a company needs to put in place a disaster recovery plan and policies that can be executed by IT staff to ensure the continuity of the existing data and/or limit the amount of data that is lost. A disaster recovery plan is a comprehensive statement of consistent actions to be taken before, during and after a disaster, according to Geoffrey H. Wold. He also states that the primary objective of disaster recovery planning is to protect the organization in the event that all or parts of its operations and/or computer services are rendered unusable. Furthermore, companies can mandate through policies that IT must maintain redundant hardware in case of any hardware failures, and redundant network connectivity in case the primary internet service provider goes down. Additionally, they can require that all staff be trained on the disaster recovery policy, to ensure that all parties involved are knowledgeable enough to execute the recovery plan.

    Business Continuity Procedures

    Business continuity procedures vary from organization to organization; however, there are standard procedures that most organizations should follow:

      • Back up data, and test the backups to ensure that they work
      • Hire knowledgeable and trainable staff
      • Offer training on new and existing systems
      • Regularly monitor, test, maintain, and upgrade existing system hardware and applications
      • Maintain redundancy regarding all data and critical business functionality

    Read the article

  • How to choose how to store data?

    - by Eldros
    Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime. - Chinese Proverb

    I could ask what kind of data storage I should use for my actual project, but I want to learn to fish, so I don't need to ask for a fish each time I begin a new project. So far, on my non-game projects, I have used two methods to store data: XML files and relational databases. I know that there are also other kinds of databases, of the NoSQL variety. However, I don't know whether there are more choices available to me, or how to choose in the first place, aside from arbitrarily picking one. So the question is the following: How should I choose the kind of data storage for a game project? I would be interested in the following criteria when choosing:

      • The size of the project.
      • The platform targeted by the game.
      • The complexity of the data structure.
      • (Added) Portability of data among many projects.
      • (Added) How often the data should be accessed.
      • (Added) Multiple types of data for the same application.
      • Any other point you think is of interest when deciding what to use.

    EDIT: I know about "Would it be better to use XML/JSON/Text or a database to store game content?", but I thought it didn't address exactly my point. Now if I am wrong, I would gladly be shown the error in my ways.

    Read the article

  • Oracle User Productivity Kit Translation

    - by ultan o'broin
    Oracle's customers just love the User Productivity Kit (UPK). I hear only great things about it from our international customers at the Oracle Usability Advisory Board meetings too. The UPK is the perfect solution for enterprise applications training needs (I previously reviewed a fine book about UPK btw). One question I am often asked is how source content created using the UPK can be translated into another language. I spoke with Peter Maravelias, Principal Product Strategy Manager for UPK, about this recently. UPK is already optimized for easy source-target translation. There is even a solution for re-recording demos. Here's what you can do to get your source content into another language:

      • Use UPK's ability to automatically translate events and actions. UPK comes with XML templates that allow you to accomplish this in 21 languages with a simple publishing action switch. These templates even deal with the tricky business of using gender-based translations. (The original post shows Spanish and Japanese localization template samples.)
      • Use the Import and Export localization features to export additional custom content in a format like XLIFF, easily handled by translation tools. You could also export and import in Word format.
      • Re-record the sound (audio) files that go with the recordings, one per screen. UPK's granular approach to the sound files means that timing isn't an issue; retiming demos isn't required. A tip here with sound files and XLIFF-exported custom content is to facilitate translation context by avoiding explicit references to actions going on in the screen recordings. A text-based storyboard with screenshots accompanying the sound files should also be provided to the translators. Provide a glossary of terms too.
      • Use the re-record option in UPK to record any demo from a translated application. This will allow all the translated UI labels to be automatically captured. You may be required to resize some action events here due to text expansion issues. Of course, you will need translated data in the translated application too, so plan for this in advance. However, source-target language skills aren't required for the re-recording.

    The UPK Player itself, of course, is also available from Oracle along with content and documentation in 21 languages. The Developer and Setup is also translated into a smaller number of languages. Check the Oracle UPK website for the latest details. UPK is a super solution for global enterprise applications training deployments, allowing source content to be translated into multiple languages easily. See this post on the UPK blog for more insight too!

    Read the article

  • Data Masking Pack 12.1.0.3 Certified with E-Business Suite 12.1.3

    - by Elke Phelps (Oracle Development)
    I'm pleased to announce the certification of the E-Business Suite 12.1.3 Data Masking Template for the Data Masking Pack with Enterprise Manager Cloud Control 12.1.0.3. You can use the Oracle Data Masking Pack with Oracle Enterprise Manager Grid Control 12c to scramble sensitive data in cloned E-Business Suite environments. You may scramble data in E-Business Suite cloned environments with EM 12.1.0.3 using the following template:

      • E-Business Suite 12.1.3 Data Masking Template for Data Masking Pack with EM12c (Patch 18462641)

    What does data masking do in E-Business Suite environments? Application data masking does the following:

      • De-identify the data: Scramble identifiers of individuals, also known as personally identifiable information or PII. Examples include information such as name, account, address, location, and driver's license number.
      • Mask sensitive data: Mask data that, if associated with personally identifiable information (PII), would cause privacy concerns. Examples include compensation, health and employment information.
      • Maintain data validity: Provide a fully functional application.

    How can EBS customers use data masking? The Oracle E-Business Suite Template for Data Masking Pack can be used in situations where confidential or regulated data needs to be shared with other non-production users who need access to some of the original data, but not necessarily every table. Examples of non-production users include internal application developers or external business partners such as offshore testing companies, suppliers or customers. Due to data dependencies, scrambling E-Business Suite data is not a trivial task. The data needs to be scrubbed in such a way that allows the application to continue to function. The template works with the Oracle Data Masking Pack and Oracle Enterprise Manager to obscure sensitive E-Business Suite information that is copied from production to non-production environments. The Oracle E-Business Suite Template for Data Masking Pack is applied to a non-production environment with the Enterprise Manager Grid Control Data Masking Pack. When applied, the Oracle E-Business Suite Template for Data Masking Pack will create an irreversibly scrambled version of your production database for development and testing.

    Is there a charge for this? Yes. You must purchase licenses for the Oracle Data Masking Pack to use the Oracle E-Business Suite 12.1.3 template. The Oracle E-Business Suite 12.1.3 Template for the Data Masking Pack is included with the Oracle Data Masking Pack license. You can contact your Oracle account manager for more details about licensing.

    References

      • Using Oracle E-Business Suite Release 12.1.3 Template for the Data Masking Pack with Oracle Enterprise Manager 12.1 Data Masking Tool (My Oracle Support Note 1481916.1)
      • Masking Sensitive Data in the Oracle Database Real Application Testing User's Guide 11g Release 2 (11.2)

    Related Articles

      • Scrambling Sensitive Data in E-Business Suite
      • E-Business Suite 12.1.3 Data Masking Certified with Enterprise Manager 12c

    Read the article

  • SQL SERVER – Automated Type Conversion using Expressor Studio

    - by pinaldave
    Recently I had an interesting situation during a consulting project. Let me share with you how I solved the problem using Expressor Studio. Consider a situation in which you need to read a field, such as customer_identifier, from a text file and pass that field into a database table. In the source file's metadata structure, customer_identifier is described as a string; however, in the target database table, customer_identifier is described as an integer. Legitimately, all the source values for customer_identifier are valid numbers, such as "109380". To implement this in an ETL application, you probably would have hard-coded a type conversion function call, such as:

        output.customer_identifier = stringToInteger(input.customer_identifier)

    That wasn't so bad, was it? For this instance, programming this hard-coded type conversion function call was relatively easy. However, hard-coding, whether type conversion code or other business rule code, almost always means that the application containing hard-coded fields, function calls, and values is: a) specific to an instance of use; b) difficult to adapt to new situations; and c) lacking in reusable sub-parts. Therefore, in the long run, applications with hard-coded type conversion function calls don't scale well. In addition, they increase the overall level of effort and degree of difficulty of writing and maintaining ETL applications. To get around the trappings of hard-coding type conversion function calls, developers need access to smarter typing systems. The Expressor Studio product offers exactly this feature, by providing developers with a type conversion automation engine based on type abstraction. The theory behind the engine is quite simple: a user specifies abstract data fields in the engine, and then writes applications against the abstractions (whereas in most ETL software, developers develop applications against the physical model). When a Studio-built application is run, Studio's engine automatically converts the source type to the abstracted data field's type and converts the abstracted data field's type to the target type. The engine can do this because it has a couple of built-in rules for type conversions. So, using the example above, a developer could specify customer_identifier as an abstract data field with a type of integer when using Expressor Studio. Upon reading the string value from the text file, Studio's type conversion engine automatically converts the source field from the type specified in the source's metadata structure to the abstract field's type. At the time of writing the data value to the target database, the engine doesn't have any work to do because the abstract data type and the target data type are the same. Had they been different, the engine would have provided the conversion automatically.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)
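
    For comparison, the same hard-coded conversion expressed directly in T-SQL at load time might look like the sketch below (table and column names are hypothetical):

        -- Hard-coded string-to-integer conversion during the load step;
        -- every new source/target pairing needs its own CAST like this.
        INSERT INTO dbo.Customer (customer_identifier)
        SELECT CAST(s.customer_identifier AS int)
        FROM dbo.StagingCustomer AS s;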

    Read the article

  • Add linux user with restricted access

    - by Dominik Str
    I need to create a user on Linux with access rights to only one folder. Background: I have installed git on my virtual server (Debian). I also created a user for the repository. There is a lot of private data on the server, but all folders have read access for others, because that's needed by the applications which run on the server. So the git user can see all the data. I would like to restrict the git user to only the folder where the repository is installed. I also tried ACLs, but it didn't work. Is there a better way to do this? Thanks in advance!
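
    A hedged sketch of one option: keep world-read in place for the applications, but add a denying ACL entry specifically for the git user on the private directories (paths and the user name below are placeholders; a matching named-user ACL entry overrides the "other" permission bits for that user):

        # Deny the git user on existing files and on files created later
        sudo setfacl -R  -m u:git:--- /srv/private-data
        sudo setfacl -dR -m u:git:--- /srv/private-data
        getfacl /srv/private-data    # verify the new entries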

    Read the article

  • Port binding conflicts with "switch user" on Windows 7

    - by C-dizzle
    We are using the switch user function within Windows 7 on an Active Directory network. We have one application in particular that gives us an error:

        Only one usage of each socket address (protocol/network address/port)
        is normally permitted. bind Port 10001

    Are there any other ports that can only be used by one session at a time and that might have an adverse effect on the other user? We try to mentor our users to use the log off function instead of switch user, but that doesn't always happen. As an alternative, is it possible to disable the 'switch user' button on our machines?
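
    On the last point, one commonly cited approach (a sketch to test before rolling out; requires admin rights) is to hide the Fast User Switching entry points via the registry value that the corresponding Group Policy setting controls:

        rem Hides "Switch user" on the logon screen and Start menu
        reg add "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System" /v HideFastUserSwitching /t REG_DWORD /d 1 /f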

    Read the article

  • Grant a user access to directories shared by root (mod: 770)

    - by Paul Dinham
    I want to grant a user (username: paul) access to all directories shared by root with mode 770. I do it this way:

        groups root    # lists the groups the root user is in
        usermod -a -G group1 paul
        usermod -a -G group2 paul
        usermod -a -G group3 paul
        ...

    All of 'group1', 'group2', 'group3' are seen in the group list of the root user. However, after adding 'paul' to all the groups above, he still cannot write to directories shared by the root user with mode 770. Did I do it wrong?
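
    A hedged checklist for this symptom (the shared path below is a placeholder): supplementary group changes only apply to sessions started after the usermod, and the directory's owning group has to be one of paul's groups.

        id paul                         # running sessions keep the old group list
        stat -c '%U:%G %a' /shared/dir  # does the group match one paul is in?
        su - paul                       # a fresh login picks up the new groups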

    Read the article

  • How to relink user folders in Windows 7

    - by Jonathan
    The short story: Win7 lost track of my user folders location (desktop, my documents, my pictures etc...). They now reside on a different partition. How can I relink these folders? The long story: The way I partition my drives is: C: - SSD drive for Windows and Program Files D: - A large regular hard drive for all my user data The first thing I do after a fresh Win7 install is move my user folders to D:, by right clicking on these folders under C:\users\username\, choosing the Location tab and clicking on Move. I've just completed encryption of D: using TrueCrypt. It shows a lot of warnings before the encryption process, but (hrrmm...) it does not mention the fact that after encryption the data is located on a new drive letter, say E: This broke Win7's links to my special user folders. How can I relink these folders?
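
    If the folders' Location tabs now point at a path that no longer exists, the per-folder paths Windows uses live under a registry key and can be repointed directly. A hedged sketch (back up the key first; "Personal" is the value name for My Documents, and the path is an example):

        rem Point My Documents at the new drive letter, then log off/on
        reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders" /v Personal /t REG_EXPAND_SZ /d "E:\Users\Jonathan\Documents" /f

    Desktop, My Pictures and the other special folders have sibling values under the same key.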

    Read the article

  • Windows XP dual screen problems, user account related

    - by Chris
    I have had this issue with a few laptops now and it looks like it is some sort of user account problem. Specifics of the system:

      • Dell laptop
      • Windows XP Pro SP3
      • Non-domain member computer
      • DLP projector connected to the laptop via VGA

    I use this setup almost daily to do presentations, always in the mirrored display mode where I can see on the laptop monitor the same thing that is displayed on the projector. Today, when I boot up, I get the mirrored display at the login screen, but after I log in, it switches to Extended Desktop (like two desktops side-by-side). Fn+F8 just cycles through all the normal settings except the mirrored display. I created a new user account on the computer and it performs normally; mirrored display works as usual. I have run into this about 4 times now and it can always be solved by creating a new user account on the computer. I would like to either:

      1. Find a way to reset the customized settings for a specific user account, which would hopefully make this go away, or
      2. Find the specific setting that causes this, so that I can easily fix it when the problem comes up.

    Creating new user accounts is kind of a pain and an easy fix must be out there somewhere.

    Read the article

  • Oracle Data Warehouse and Big Data Magazine MAY Edition for Customers + Partners

    - by KLaker
    The latest edition of our monthly data warehouse and big data magazine for Oracle customers and partners is now available. The content for this magazine is taken from the various data warehouse and big data Oracle product management blogs, Oracle press releases, videos posted on Oracle Media Network, and Oracle Facebook pages. Click here to view the May Edition. Please share this link to our magazine with your customers and partners: http://flip.it/fKOUS. This magazine is optimized for display on tablets and smartphones using the Flipboard App, which is available from the Apple App Store and Google Play Store.

    Read the article

  • XSLT 1.0 grouping to reformat element defined by date into element defined by task

    - by Daniel
    Hi folks, I have a tricky XSLT transformation and I'd like your advice. My input XML is formatted as below:

        <Person>
            <name>John</name>
            <date>June12</date>
            <workTime taskID="1">34</workTime>
            <workTime taskID="2">12</workTime>
        </Person>
        <Person>
            <name>John</name>
            <date>June13</date>
            <workTime taskID="1">21</workTime>
            <workTime taskID="2">11</workTime>
        </Person>

    The output XML should be:

        <Person>
            <name>John</name>
            <taskID>1</taskID>
            <workTime>
                <date>June12</date>
                <time>34</time>
            </workTime>
            <workTime>
                <date>June13</date>
                <time>21</time>
            </workTime>
        </Person>
        <Person>
            <name>John</name>
            <taskID>2</taskID>
            <workTime>
                <date>June12</date>
                <time>12</time>
            </workTime>
            <workTime>
                <date>June13</date>
                <time>11</time>
            </workTime>
        </Person>

    Essentially, as an input, a "Person" object gathers all the task/workTime pairs for a specific date. As an output, I want the "Person" object to gather the date/workTime pairs for a specific task. I need to use XSLT 1.0. I've been trying to use grouping with a key but keep getting puzzled. Appreciate your help. Daniel
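
    A hedged sketch of the usual XSLT 1.0 approach (Muenchian grouping): key the workTime elements by person name plus taskID, then emit one Person per distinct key. Untested against the exact input, and it assumes the Person elements share a single document root:

        <xsl:key name="byNameAndTask" match="workTime"
                 use="concat(../name, '|', @taskID)"/>

        <xsl:template match="/">
          <!-- the first workTime of each name|taskID group starts a Person -->
          <xsl:for-each select="//workTime[generate-id() = generate-id(
              key('byNameAndTask', concat(../name, '|', @taskID))[1])]">
            <Person>
              <name><xsl:value-of select="../name"/></name>
              <taskID><xsl:value-of select="@taskID"/></taskID>
              <!-- then pull every group member's date and time -->
              <xsl:for-each select="key('byNameAndTask',
                                        concat(../name, '|', @taskID))">
                <workTime>
                  <date><xsl:value-of select="../date"/></date>
                  <time><xsl:value-of select="."/></time>
                </workTime>
              </xsl:for-each>
            </Person>
          </xsl:for-each>
        </xsl:template>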

    Read the article

  • Version control and data provenance in charts, slides, and marketing materials that derive from code ouput

    - by EMS
    I develop as part of a small team that mostly does research and statistics stuff. But from the output of our code, other teams often create promotional materials, slides, presentations, etc. We run into a big problem because the marketing team (non-programmers) tend to use Excel, Adobe products, or other tools to carry out their work, and just want easy-to-use data formats from us. This leads to data provenance problems. We see email chains with attachments from 6 months ago and someone is saying "Hey, who generated this data? Can you generate more of it with the recent 6 months of results added in?" I want to help the other teams effectively use version control (my team uses it reasonably well for the code, but every other team classically comes up with many excuses to avoid it). For version controlling a software project where the participants are coders, I have some reasonable understanding of best practices and what to do. But for getting a team of marketing professionals to version control marketing materials and associate metadata about the software used to generate the data for the charts, I'm a bit at a loss. Some of the goals I'd like to achieve:

      • Data that supported a material should never be associated with a person. As in, it should never be the case that someone says "Hey Person XYZ, I see you sent me this data as an attachment 6 months ago, can you update it for me?" Rather, data should be associated with the code and code version of any code that was used to get it, and perhaps a team of many people who may maintain that code. Then references for data updates are about executing a specific piece of code, with a known version number.
      • I'd like this to be a process that works easily with the tech that the marketing team already uses (e.g. Excel files, Adobe files, whatever). I don't want to burden them with needing to learn a bunch of new stuff just to use version control. They are capable folks, so learning something is fine. Ideally they could use our existing version control framework, but there are some issues around that. I think knowing some general best practices will be enough though, and I can handle patching that into the way our stuff works now.

    Are there any goals I am failing to think about? What are the time-tested ways to do something like this?

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 4)

    - by Hugo Kornelis
    Scalar user-defined functions are bad for performance. I already showed that for T-SQL scalar user-defined functions without and with data access, and for most CLR scalar user-defined functions without data access, and in this blog post I will show that CLR scalar user-defined functions with data access fit into that picture. First attempt: sticking to my simplistic example of finding the triple of an integer value by reading it from a pre-populated lookup table and following the standard recommendations...(read more)
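
    For context, a sketch of the kind of data-accessing scalar UDF this series benchmarks (object names are illustrative, not the author's actual code):

        -- Scalar UDF with data access: one lookup query per input row,
        -- which is exactly the per-row overhead the series measures.
        CREATE FUNCTION dbo.Triple (@Value int)
        RETURNS int
        AS
        BEGIN
          DECLARE @Result int;
          SELECT @Result = t.Triple
          FROM dbo.TripleLookup AS t
          WHERE t.Value = @Value;
          RETURN @Result;
        END;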

    Read the article

  • What does it mean to treat data as an asset?

    What does it mean to treat data as an asset? When considering this concept, we must define what data is and how it can be considered an asset. Data can easily be defined as a collection of stored truths that are open to interpretation and manipulation. Expanding on this definition, data can be viewed as a set of captured facts, measurements, and ideas used to make decisions. Furthermore, InvestorsWords.com defines an asset as any item of economic value owned by an individual or corporation. Now let's apply this definition of asset to our definition of data, and ask the following question: can facts, measurements and ideas be items of economic value owned by an individual or corporation? The obvious answer is yes; data can be bought and sold like commodities, or analyzed to make smarter business decisions. We can look at the economic value of data in one of two ways. First, data can be sold as a commodity that can take the form of goods like eBooks, training, music, movies, and so on. Customers are willing to pay to gain access to this data for their consumption. This directly implies that there is an economic value for data in the form of a commodity, because customers see a value in obtaining it. Secondly, data can be used to make smarter business decisions that allow companies to become more profitable and/or reduce their potential for risk in regards to how they operate. In the past I have worked at companies where we had to analyze previous sales activities in conjunction with current activities to determine how the company was performing for the quarter. In addition, trends can be formulated from existing data that allow companies to forecast so that they can make strategic business decisions based on sound forecasted data. Companies that truly value their data are constantly trying to grow and upgrade their data and supporting applications, because it is the life blood of a company. If we look at an eBook retailer, for example, imagine if they lost all of their data. They would in essence be forced out of business because they would have nothing to sell. In turn, if we look at a company that was using data to facilitate better decision-making processes and they lost all of their data, then they could be losing potential revenue and/or increasing the company's losses by making important business decisions virtually in the dark, compared to when they were made on solid data.

    Read the article

  • Change user login in Windows 7 (after a misprint in username)

    - by Artem Russakovskii
    I have an install of Windows 7 that I've already put a few days into. Today I realized I've made a mistake in the username, and it's driving me nuts (my personal OCD). While changing the physical folder name is perhaps possible, though quite involved, I do not want to open that can of worms. What I want to do is simply change the username I give when the login prompt shows up. I thought it would be possible by just renaming the user account in User Accounts, but that didn't work. Is it possible to do, then? Or is the only way to create another user and spend hours migrating everything I'd already customized to that user?
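
    One detail worth knowing: the Control Panel rename changes only the display name, not the logon name, which would explain why it didn't work. The account itself can be renamed without touching the profile folder, e.g. via lusrmgr.msc on Pro/Ultimate editions, or from an elevated prompt (a sketch; old and new names below are placeholders):

        wmic useraccount where name='oldname' rename newname

    The profile folder keeps the old name, which matches the can of worms you want to avoid.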

    Read the article

  • Translatability Guidelines for Usability Professionals

    - by ultan o'broin
    There is clearly a demand for translatability guidelines aimed at usability professionals working in the enterprise applications space, judging by Google Analytics and the interest generated in the Twitterverse by my previous post on the subject. So let's continue the conversation. I'll flesh out each of the original points a bit more in posts over the coming weeks. Bear in mind that large-scale enterprise translation is a process. It needs to be scalable, repeatable, and maintainable, and above all meet the requirements of automation. That doesn't mean the user experience needs to suffer, however. So, stay tuned for some translatability best practices for usability professionals....

    Read the article

  • "Slave" user accounts in GNU/Linux

    - by Vi
    How can I make one user account act like root for some other user account, e.g. be able to read, write, and chmod all of its files, chown from this account to the master and back, kill/ptrace all of its processes, and do everything else root can, but limited only to that particular slave account? Right now I'm simulating this by allowing the "master" user to "sudo -u slaveuser" and setting setfacl -dRm u:masteruser:rwx ~slaveuser. It is useful as I run most desktop programs under separate user accounts, but need to move files between them sometimes. If it requires some simple kernel patch, that is OK.
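
    For reference, a sketch of the workaround described above, spelled out (user names are placeholders; the sudoers line should be added with visudo):

        # let masteruser run anything as slaveuser
        # (in /etc/sudoers or /etc/sudoers.d/):
        #   masteruser ALL=(slaveuser) NOPASSWD: ALL

        # grant masteruser rwx on slaveuser's existing and future files
        setfacl -Rm  u:masteruser:rwx ~slaveuser
        setfacl -dRm u:masteruser:rwx ~slaveuser

    Process control then goes through the same sudo route, e.g. sudo -u slaveuser kill <pid>.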

    Read the article

  • google analytics - real-time user stats vs audience overview user stats

    - by udog
    When I look at the real-time analytics reporting for our app, it shows around 150-180 users at around 10AM (our peak usage time). When I look at the Audience Overview report for the same day (hourly breakdown), the number of users shown for the 10AM hour is over 1000. I'm sure this has to do with some sort of aggregation, but I would like to know more about how these two numbers are calculated in order to understand it.

    Read the article

  • Data Virtualization: Federated and Hybrid

    - by Krishnamoorthy
    Data becomes useful when it can be leveraged at the right time. It is not only enterprise application stores that operate on data of large volume, velocity, and variety; mobile and social computing also need to operate on such data. Replicating and transferring large swaths of data is one challenge faced in the field of data integration. However, smaller chunks of data aggregated from a variety of sources present an even more interesting challenge in the industry. Over the past few decades, technology trends have focused on best user experience, operating systems, high performance computing, high performance web sites, analysis of warehouse data, service oriented architecture, social computing, cloud computing, and big data. Operating on 'dark data' becomes mandatory in future technology trends, although no solution can make dark data useful in a single day. Useful data can be characterized by contextual, personalized, and on-time delivery. In most cases, data from a single source does not complete the picture. Data has to be combined and computed from various sources, and may be captured as hybrid data, meaning a combination of structured and unstructured data. Since related data is often found across disparate sources, effectively integrating these sources determines how useful this data ultimately becomes. Technology trends in 2013 are expected to focus on big data and private cloud. Consumers are not merely interested in where data is located or how data is retrieved and computed; consumers are interested in how quickly the data can be leveraged. In many cases, data virtualization is the right solution, and it is expected to play a foundational role for SOA, cloud integration, and big data. The Oracle Data Integration portfolio includes a data virtualization product called ODSI (Oracle Data Service Integrator). Unlike other data virtualization solutions, ODSI can perform both read and write operations on federated/hybrid data (RDBMS, web services, delimited files, and XML). The ODSI engine is built on XQuery, hence an ODSI user can perform computations on data using either XQuery or SQL. Built-in data and query caching features reduce latency in repetitive calls. Rightly positioned, ODSI can result in a highly scalable model, reducing spend on additional hardware infrastructure.

    Read the article

  • Rails 4, not saving @user.save when registering new user

    - by Yuichi
    When I try to register a user, it does not give me any error, but it just cannot save the user. I don't have attr_accessible. I'm not sure what I am missing. Please help me.

    user.rb:

        class User < ActiveRecord::Base
          has_secure_password
          validates :email, presence: true, uniqueness: true,
                    format: { with: /\A([^@\s]+)@((?:[-a-z0-9]+\.)+[a-z]{2,})\Z/i }
          validates :password, presence: true, length: { minimum: 6 }
          validates :nickname, presence: true, uniqueness: true
        end

    users_controller.rb:

        class UsersController < ApplicationController
          def new
            @user = User.new
          end

          def create
            @user = User.new(user_params)
            # Not saving @user ...
            if @user.save
              flash[:success] = "Successfully registered"
              redirect_to videos_path
            else
              flash[:error] = "Cannot create an user, check the input and try again"
              render :new
            end
          end

          private

          def user_params
            params.require(:user).permit(:email, :password, :nickname)
          end
        end

    Log:

        Processing by UsersController#create as HTML
          Parameters: {"utf8"=>"?", "authenticity_token"=>"x5OqMgarqMFj17dVSuA8tVueg1dncS3YtkCfMzMpOUE=", "user"=>{"email"=>"[email protected]", "password"=>"[FILTERED]", "nickname"=>"example"}, "commit"=>"Register"}
          (0.1ms) begin transaction
          User Exists (0.2ms) SELECT 1 AS one FROM "users" WHERE "users"."email" = '[email protected]' LIMIT 1
          User Exists (0.1ms) SELECT 1 AS one FROM "users" WHERE "users"."nickname" = 'example' LIMIT 1
          (0.1ms) rollback transaction
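
    A quick way to surface which validation is rejecting the record (a console sketch; the attribute values are placeholders matching your params):

        # in `rails console`
        user = User.new(email: "a@example.com", password: "longenough",
                        nickname: "example")
        user.valid?                  # => false when a validation fails
        user.errors.full_messages    # names the exact failing rule(s)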

    Read the article
