Search Results

Search found 39788 results on 1592 pages for 'action method'.


  • How do you find all the links to disavow for a Google reconsideration request? [duplicate]

    - by QF_Developer
    This question already has an answer here: How to identify spammy domains giving backlinks to my site (to submit in disavow links in WMT) (2 answers)

    A few months ago I received the following notification on Google Webmaster for a website I look after: "Unnatural links to your site—impacts links. Google has detected a pattern of unnatural, artificial, deceptive, or manipulative links pointing to pages on this site. Some links may be outside of the webmaster's control, so for this incident we are taking targeted action on the unnatural links instead of on the site's ranking as a whole. Learn more."

    The question here is: should we actively attempt to disavow these links, given that the action is seemingly targeted at just a bunch of keywords? I've downloaded the inbound links sample from Google Webmaster, and so far I've been through the disavow and reconsideration request process 6 times, each round taking 2-3 weeks only to be supplied with just 2 more links that Google doesn't approve of. At this rate it will take me the rest of my natural life to clean up all these spammy links! It seems disavowing is futile, as they haven't implemented broad actions against the website as a whole and (from what I can gather) have already nullified the value of those offending links. Under the quoted statement above, however, is a reconsideration request button that seems to imply I should be actively doing something here?

    UPDATE 14th October -- I have since created a small .NET application that you can feed the CSV sample links file from Google Webmaster into. This tool crawls all the links and looks for specific linking patterns, as per some configurable match strings. I realised that many of the links Google is taking issue with were created by a rogue SEO firm we hired several years ago. All the links are appended with 1 of 5 different descriptions. The application I built uses some regexes to isolate any link sources with these matching appendages and automatically builds the disavow txt file. In the end it had to come down to an algorithm, as manually disavowing links on this scale would take weeks! I will post the app here once I've cleaned it up.
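    For illustration, here is a minimal C# sketch of the kind of tool described above — not the asker's actual app; the file names and match strings are placeholders (the real five descriptions are not given in the post):

        // Minimal sketch, assuming a one-URL-per-line links file.
        // Patterns below are placeholders for the five appended descriptions.
        using System;
        using System.IO;
        using System.Linq;
        using System.Net;
        using System.Text.RegularExpressions;

        class DisavowBuilder
        {
            static void Main()
            {
                string[] patterns = { @"cheap\s+widgets\s+online", @"best\s+widget\s+deals" };

                var spammyDomains = File.ReadLines("links_sample.csv")
                    .Where(url => patterns.Any(p =>
                        Regex.IsMatch(FetchPage(url), p, RegexOptions.IgnoreCase)))
                    .Select(url => new Uri(url).Host)
                    .Distinct();

                // Google's disavow file format: one "domain:" entry per line.
                File.WriteAllLines("disavow.txt",
                    spammyDomains.Select(d => "domain:" + d));
            }

            static string FetchPage(string url)
            {
                using (var client = new WebClient())
                {
                    try { return client.DownloadString(url); }
                    catch (WebException) { return string.Empty; }
                }
            }
        }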

    Read the article

  • Making a Case For The Command Line

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/06/30/making-a-case-for-the-command-line.aspx

    I have had an idea percolating in the back of my mind for over a year now that I've just recently started to implement. This idea relates to building out "internal tools" to ease the maintenance and on-going support of a software system. The system that I currently work on is (mostly) web-based, so traditionally we have built these internal tools in the form of pages within the app that are only accessible by our developers and support personnel. These pages allow us to perform tasks within the system that, for one reason or another, we don't want to let our end users perform (e.g. mass create/update/delete operations on data, flipping switches that turn paid modules of the system on or off, etc.). When we try to build new tools like this we often struggle with the level of effort required to build them.

    Effort Required

    Creating a whole new page in an existing web application can be a fairly large undertaking. You need to create the page and ensure it will have a layout that is consistent with the other pages in the app. You need to decide what types of input controls need to go onto the page. You need to ensure that everything uses the same style as the rest of the site. You need to figure out what the text on the page should say. Then, when you figure out that you forgot about an input that should really be present, you might have to go back and re-work the entire thing. Oh, and in addition to all of that, you still have to, you know, write the code that actually performs the task. Everything other than the code that performs the task at hand is just overhead. We don't need a fancy date picker control in a nicely styled page for the vast majority of our internal tools. We don't even really need a page, for that matter. We just need a way to issue a command to the application and have it, in turn, execute the code that we've written to accomplish a given task. All we really need is a simple console application!

    Plumbing Problems

    A former co-worker of mine, John Sonmez, always advocated the Unix philosophy for building internal tools: start with something that runs at the command line, and then build a UI on top of that if you need to. John's idea has a lot of merit, and we tried building out some internal tools as simple console applications. Unfortunately, this was often easier said than done. Doing a "File –> New Project" to build out a tool for a mature system can be pretty daunting because that new project is totally empty. In our case, the web application code had a lot of "plumbing" built in: it managed authentication and authorization, it handled database connection management for our multi-tenanted architecture, and it managed all of the context that needs to follow a user around the application, such as their timezone and regional/language settings. In addition, the configuration file for the web application (a web.config in our case, because this is an ASP.NET application) is large and would need to be reproduced into a similar configuration file for a console application. While most of these problems could be solved pretty easily with some refactoring of the codebase, building console applications for internal tools still potentially suffers from one pretty big drawback: you'd have to execute them on a machine with network access to all of the needed resources.
    Obviously, our web servers can easily communicate with the database servers and can publish messages to our service bus, but the same is not true for all of our developer and support personnel workstations. We could have everyone run these tools remotely via RDP or SSH, but that's a bit cumbersome and certainly a lot less convenient than having the tools built into the web application that is so easily accessible.

    Mix and Match

    So we need a way to build tools that are easily accessible via the web application but also don't require the overhead of creating a user interface. This is where my idea comes into play: why not just build a command line interface into the web application? If it's part of the web application we get all of the plumbing that comes along with that code, and we're executing everything on the web servers, which means we'll have access to any external resources that we might need. Rather than having to incur the overhead of creating a brand new page for each tool that we want to build, we can create one new page that simply accepts a command in text form and executes it as a request on the web server. In this way, we can focus on writing the code to accomplish the task. If the tool ends up being heavily used, then (and only then) should we consider spending the time to build a better user experience around it. To be clear, I'm not trying to downplay the importance of building great user experiences into your system; we should all strive to provide the best UX possible to our end users. I'm only advocating this sort of bare-bones interface for internal consumption by the technical staff that builds and supports the software. This command line interface should be the "back end" to a highly polished and eye-pleasing public face.

    Implementation

    As I mentioned at the beginning of this post, this is an idea that I've had for a while but have only recently started building out. I've outlined some general guidelines and design goals for this effort as follows:

    Text in, text out: In the interest of keeping things as simple as possible, I want this interface to be purely text-based. Users will submit commands as plain text, and the application will provide responses in plain text. Obviously this text will be "wrapped" within the context of HTTP requests and responses, but I don't want to have to think about HTML or CSS when taking input from the user or displaying responses back to the user.

    Task-oriented code only: After building the initial "harness" for this interface, the only code that should need to be written to create a new internal tool should be code that is expressly needed to accomplish the task that the tool is intended to support. If we want to encourage and enable ourselves to build good tooling, we need to lower the barriers to entry as much as possible.

    Built-in documentation: One of the great things about most command line utilities is the 'help' switch that provides usage guidelines and details about the arguments that the utility accepts. Our web-based command line utility should allow us to build the documentation for these tools directly into the code of the tools themselves.

    I finally started trying to implement this idea when I heard about a fantastic open-source library called CLAP (Command Line Auto Parser) that lets me meet the guidelines outlined above. CLAP lets you define classes with public methods that can be easily invoked from the command line.
    Here's a quick example of the code that would be needed to create a new tool to do something within your system:

        public class CustomerTools
        {
            [Verb]
            public void UpdateName(int customerId, string firstName, string lastName)
            {
                // invoke internal services/domain objects/whatever to perform update
            }
        }

    This is just a regular class with a single public method (though you could have as many methods as you want). The method is decorated with the 'Verb' attribute that tells the CLAP library that it is a method that can be invoked from the command line. Here is how you would invoke that code:

        Parser.Run(args, new CustomerTools());

    Note that 'args' is just a string[] that would normally be passed in from the static Main method of a console application. Also, CLAP allows you to pass in multiple classes that define [Verb] methods, so you can opt to organize the code that CLAP will invoke in any way that you like. You can invoke this code from a command line application like this:

        SomeExe UpdateName -customerId:123 -firstName:Jesse -lastName:Taber

    'SomeExe' in this example just represents the name of the .exe that would be created from our console application. CLAP then interprets the arguments passed in order to find the method that should be invoked and automatically parses out the parameters that need to be passed in. After a quick spike, I've found that invoking the 'Parser' class can be done from within the context of a web application just as easily as it can from within the 'Main' method entry point of a console application. There are, however, a few sticking points that I'm working around:

    Splitting arguments into the 'args' array like the command line: When you invoke a standard .NET console application you get the arguments that were passed in by the user split into a handy array (this is the 'args' parameter referenced above). Generally speaking they get split by whitespace, but it's also clever enough to handle things like ignoring whitespace in a phrase that is surrounded by quotes. We'll need to re-create this logic within our web application so that we can give the 'args' value to CLAP just like a console application would.

    Providing a response to the user: If you were writing a console application, you might just use Console.WriteLine to provide responses to the user as to the progress and eventual outcome of the command. We can't use Console.WriteLine within a web application, so I'll need to find another way to provide feedback to the user. Preferably this approach would allow me to use the same handler classes from both a console application and a web application, so some kind of strategy pattern will likely emerge from this effort.

    Submitting files: Often an internal tool needs to support doing some kind of operation in bulk, and the easiest way to submit the data needed to support the bulk operation is in a file. Getting the file uploaded and available to the CLAP handler classes will take a little bit of effort.

    Mimicking the console experience: This isn't really a requirement so much as a "nice to have". To start out, the command-line interface in the web application will probably be a single 'textarea' control with a button to submit the contents to a handler that will pass it along to CLAP to be parsed and run. I think it would be interesting to use some javascript and CSS trickery to change that page into something with more of a "shell" interface look and feel.
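    As a rough illustration of the first sticking point above, here is a minimal C# sketch of a quote-aware splitter; the helper name is mine, not part of CLAP or the original post:

        // Hypothetical helper mimicking how the CLR splits a command line:
        // whitespace separates tokens, except inside double-quoted phrases.
        using System.Collections.Generic;
        using System.Text;

        static class CommandLineTokenizer
        {
            public static string[] Split(string input)
            {
                var args = new List<string>();
                var current = new StringBuilder();
                bool inQuotes = false;

                foreach (char c in input)
                {
                    if (c == '"')
                    {
                        inQuotes = !inQuotes; // toggle quoted-phrase mode
                    }
                    else if (char.IsWhiteSpace(c) && !inQuotes)
                    {
                        if (current.Length > 0)
                        {
                            args.Add(current.ToString());
                            current.Clear();
                        }
                    }
                    else
                    {
                        current.Append(c);
                    }
                }
                if (current.Length > 0) args.Add(current.ToString());
                return args.ToArray();
            }
        }

    With this, Split("UpdateName -firstName:\"Mary Ann\" -lastName:Taber") yields three tokens, with the quoted phrase kept intact.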
    I'll be blogging more about this effort in the future and will include some code snippets (or maybe even a full-blown example app) as I progress. I also think that I'll probably end up either submitting some pull requests to the CLAP project or possibly forking/wrapping it into a more web-friendly package and open sourcing that.

    Read the article

  • How to read Scala code with lots of implicits?

    - by Petr Pudlák
    Consider the following code fragment (adapted from http://stackoverflow.com/a/12265946/1333025):

        // Using scalaz 6
        import scalaz._, Scalaz._

        object Example extends App {
          case class Container(i: Int)

          def compute(s: String): State[Container, Int] = state {
            case Container(i) => (Container(i + 1), s.toInt + i)
          }

          val d = List("1", "2", "3")
          type ContainerState[X] = State[Container, X]

          println(d.traverse[ContainerState, Int](compute) ! Container(0))
        }

    I understand what it does at a high level. But I wanted to trace what exactly happens during the call to d.traverse at the end. Clearly, List doesn't have traverse, so it must be implicitly converted to another type that does. Even though I spent a considerable amount of time trying to find out, I wasn't very successful. First I found that there is a method in scalaz.Traversable:

        traverse[F[_], A, B](f: (A) => F[B], t: T[A])(implicit arg0: Applicative[F]): F[T[B]]

    but clearly this is not it (although it's most likely that "my" traverse is implemented using this one). After a lot of searching, I grepped the scalaz source code and found scalaz.MA's method:

        traverse[F[_], B](f: (A) => F[B])(implicit a: Applicative[F], t: Traverse[M]): F[M[B]]

    which seems to be very close. Still, I'm missing what List is converted to in my example, and whether it uses MA.traverse or something else. The question is: What procedure should I follow to find out what exactly is called at d.traverse? Having even such simple code be this hard to analyze seems like a big problem to me. Am I missing something very simple? How should I proceed when I want to understand code that uses a lot of imported implicits? Is there some way to ask the compiler what implicits it used? Or is there something like Hoogle for Scala, so that I can search for a method just by its name?

    Read the article

  • ASP.NET MVC Controllers & Actions In Regards To URLs And SEO

    - by user1066133
    The general idea is that if I were to create an MVC site, simple pages such as the contact and about pages would be placed under the Home controller. So my URLs would look like http://www.mysite.com/home/contact and http://www.mysite.com/home/about. The above works just fine, but I really don't like having the "home" portion in the URL. So what negatives would there be if I decided to create controllers named Contact and About, each with just a single Index action, so that the URLs would be simplified to http://www.mysite.com/contact and http://www.mysite.com/about? This method looks cleaner. Do any of you do the same or something similar? I've been trying to work on SEO for an escort service site I've developed, and when you search for the females the link looks like http://www.mysite.com/escorts/female-escorts, and likewise for males. I'm wondering if I should remove the Escorts controller and just create a Female_Escorts controller with an Index action only, so it comes out like the above as http://www.mysite.com/female-escorts.
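    For comparison, a minimal sketch of the routing alternative — keeping one controller and mapping flat URLs onto it explicitly. The route names and patterns here are illustrative, not from the question:

        // RouteConfig sketch for ASP.NET MVC: flat URLs without extra controllers.
        using System.Web.Mvc;
        using System.Web.Routing;

        public static class RouteConfig
        {
            public static void RegisterRoutes(RouteCollection routes)
            {
                // /contact and /about map onto existing HomeController actions.
                routes.MapRoute("Contact", "contact",
                    new { controller = "Home", action = "Contact" });
                routes.MapRoute("About", "about",
                    new { controller = "Home", action = "About" });

                // Conventional fallback route.
                routes.MapRoute("Default", "{controller}/{action}/{id}",
                    new { controller = "Home", action = "Index", id = UrlParameter.Optional });
            }
        }

    This avoids one-action controllers while still dropping "home" from the URL; the trade-off is that each new flat page needs its own route entry.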

    Read the article

  • String.IsNullOrWhiteSpace

    - by Scott Dorman
    An empty string is different from an unassigned string variable (which is null): it is a string containing no characters between the quotes (""). The .NET Framework provides String.Empty to represent an empty string, and there is no practical difference between "" and String.Empty. One of the most common string comparisons to perform is determining whether a string variable is equal to an empty string. The fastest and simplest way to determine if a string is empty is to test whether the Length property is equal to 0. However, since strings are reference types, it is possible for a string variable to be null, which would result in a runtime error when you tried to access the Length property. Since testing whether a string is empty is such a common occurrence, the .NET Framework provides the static String.IsNullOrEmpty method:

        public static bool IsNullOrEmpty(string value)
        {
            if (value != null)
            {
                return (value.Length == 0);
            }
            return true;
        }

    It is also very common to need to know whether a string is empty or contains nothing but whitespace characters. For example, String.IsNullOrEmpty("   ") would return false, since this string is actually made up of three whitespace characters. In some cases this may be acceptable, but in many others it is not. To help simplify testing this scenario, the .NET Framework 4 introduces the String.IsNullOrWhiteSpace method:

        public static bool IsNullOrWhiteSpace(string value)
        {
            if (value != null)
            {
                for (int i = 0; i < value.Length; i++)
                {
                    if (!char.IsWhiteSpace(value[i]))
                    {
                        return false;
                    }
                }
            }
            return true;
        }

    Using either String.IsNullOrEmpty or String.IsNullOrWhiteSpace helps ensure correctness, readability, and consistency, so they should be used in all situations where you need to determine if a string is null, empty, or contains only whitespace characters.

    Technorati Tags: .NET, C# 4
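    A quick runnable sketch showing the difference side by side:

        using System;

        class Demo
        {
            static void Main()
            {
                Console.WriteLine(String.IsNullOrEmpty("   "));      // False: three whitespace chars
                Console.WriteLine(String.IsNullOrWhiteSpace("   ")); // True
                Console.WriteLine(String.IsNullOrWhiteSpace(null));  // True: null counts as "no content"
                Console.WriteLine(String.IsNullOrWhiteSpace(""));    // True: so does empty
            }
        }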

    Read the article

  • VSDB to SSDT part 3: command-line deployment with SqlPackage.exe, a replacement for Vsdbcmd.exe

    - by Etienne Giust
    For our continuous integration needs, we use a PowerShell script to handle deployment. A simpler approach would be to have a deployment task embedded within the build process. See the solution provided here by Jakob Ehn (a most interesting read which also dives into the "deploying from Visual Studio" specifics): http://geekswithblogs.net/jakob/archive/2012/04/25/deploying-ssdt-projects-with-tfs-build.aspx

    For our needs, though, clearly separating our build phase from our deployment phase is important. It allows us to instantly deploy old versions, and it is more convenient for continuous integration. So we stick with the PowerShell script approach. With VSDB projects, that script used to call the following command (the vsdbcmd executable was locally available, along with the needed libraries):

        vsdbcmd.exe /a:Deploy /dd /cs:<CONNECTIONSTRING TO TARGET DB> /dsp:SQL /manifest:<PATH TO .deploymanifest FILE>

    To do approximately the same thing with an SSDT-produced file (dacpac), you would call this command on a machine which has VS2012 installed (or the SSDT installed, see here: http://msdn.microsoft.com/en-us/library/hh500335%28v=vs.103%29):

        C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    And from within a PowerShell script:

        & "C:\Program Files (x86)\Microsoft SQL Server\110\DAC\bin\SqlPackage.exe" /Action:Publish /SourceFile:<PATH TO Database.dacpac FILE> /Profile:<PATH TO .publish.xml FILE>

    The command consumes a publish.xml file where the connection string and the deployment options are specified. You will be familiar with it if you have done some deployments from Visual Studio. If not, please refer to the above-mentioned article by Jakob Ehn.

    It is also possible to pass those parameters on the command line. The complete SqlPackage.exe syntax is detailed here: http://msdn.microsoft.com/en-us/library/hh550080%28v=vs.103%29.aspx

    Read the article

  • DC Comics Identifies Krypton on the Star Map

    - by Jason Fitzpatrick
    This week Action Comics Superman #14 hits the stands, and DC Comics reveals the actual location of Krypton, delivered by none other than beloved astrophysicist Neil Tyson. Phil Plait at Bad Astronomy reports on the resolution of fans' long-standing curiosity about the location of Krypton:

    "Well, that's about to change. DC comics is releasing a new book this week – Action Comics Superman #14 – that finally reveals the answer to this stellar question. And they picked a special guest to reveal it: my old friend Neil Tyson. Actually, Neil did more than just appear in the comic: he was approached by DC to find a good star to fit the story. Red supergiants don't work; they explode as supernovae when they are too young to have an advanced civilization rise on any orbiting planets. Red giants aren't a great fit either; they can be old, but none is at the right distance to match the storyline. It would have to be a red dwarf: there are lots of them, they can be very old, and some are close enough to fit the plot. I won't keep you in suspense: the star is LHS 2520, a red dwarf in the southern constellation of Corvus (at the center of the picture here). It's an M3.5 dwarf, meaning it has about a quarter of the Sun's mass, a third its diameter, roughly half the Sun's temperature, and a luminosity of a mere 1% of our Sun's. It's only 27 light years away – very close on the scale of the galaxy – but such a dim bulb you need a telescope to see it at all (for any astronomers out there, the coordinates are RA: 12h 10m 5.77s, Dec: -15° 4m 17.9s)."

    Read the article

  • Manage SQL Server Connectivity through Windows Azure Virtual Machines Remote PowerShell

    - by SQLOS Team
    This blog post comes from Khalid Mouss, Senior Program Manager in Microsoft SQL Server.

    Overview

    The goal of this blog is to demonstrate how we can automate, through PowerShell, connecting multiple SQL Server deployments in Windows Azure Virtual Machines. We will configure a TCP port that we open (and close) through the Windows Firewall from a remote PowerShell session to the Virtual Machine (VM). This demonstrates how to take advantage of the remote PowerShell support in Windows Azure Virtual Machines to automate the steps required to connect SQL Server in the same cloud service and in different cloud services.

    Scenario 1: VMs connected through the same Cloud Service

    Two virtual machines are configured in the same cloud service, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that we can run PS and other commands directly on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured on both VMs by default to be able to connect to them and check the connections to the SQL instances, for demo purposes only; it is not actually required.

    Step 1 – Provision VMs and Configure Ports

    Provision VM1, named DemoVM1 (see example screenshots below if using the portal). Provision VM2 (DemoVM2) with PowerShell Remoting enabled and connected to DemoVM1 above (see example screenshots below if using the portal). After provisioning the 2 VMs above, here are the default port configurations, for example:

    Step 2 – Verify / Confirm the TCP port used by the Database Engine

    By default, the port will be configured to be 1433 – this can be changed to a different port number if desired.

    1. RDP to each of the VMs created above – this also ensures the VMs complete SysPrep(ing) and finish configuration.
    2. Go to SQL Server Configuration Manager -> SQL Server Network Configuration -> Protocols for <SQL instance> -> TCP/IP -> IP Addresses.
    3. Confirm the port number used by the SQL Server engine; in this case 1433.
    4. Update from Windows Authentication to Mixed mode.
    5. Restart the SQL Server service for the change to take effect.
    6. Repeat steps 3, 4, and 5 for the second VM: DemoVM2.

    Step 3 – Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <username> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows Firewall

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output:

        netsh advfirewall firewall show rule name=DemoVM1Port

        Rule Name:      DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:        Yes
        Direction:      In
        Profiles:       Domain,Private,Public
        Grouping:
        LocalIP:        Any
        RemoteIP:       Any
        Protocol:       TCP
        LocalPort:      1433
        RemotePort:     Any
        Edge traversal: No
        Action:         Allow
        Ok.
    Step 5 – Now connect from DemoVM2 to the DB instance in DemoVM1

    Step 6 – Close port 1433 in the Windows Firewall

        netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM2 to the DB instance in DemoVM1

    Because port 1433 has been closed (in step 6) in the Windows Firewall on the DemoVM1 machine, we can no longer connect from DemoVM2 remotely to DemoVM1.

    Scenario 2: VMs provisioned in different Cloud Services

    Two virtual machines are configured in different cloud services, each running a different SQL Server instance. Both VMs are configured with remote PowerShell turned on, so that we can run PS and other commands directly on them remotely in order to re-configure them to allow incoming SQL connections from a remote VM or on-premise machine(s). Note: RDP (Remote Desktop Protocol) is kept configured on both VMs by default to be able to connect to them and check the connections to the SQL instances, for demo purposes only; it is not actually needed.

    Step 1 – Provision new VM3

    Provision VM3, named DemoVM3 (see example screenshots below if using the portal). After provisioning is complete, here are the default port configurations:

    Step 2 – Add a public port to VM1 to connect to its DB instance from VM3

    Since VM3 and VM1 are not connected in the same cloud service, we will need to specify the full DNS address when connecting between the machines, which includes the public port. We shall add a public port, 57000 in this case, that is linked to private port 1433, which will be used later to connect to the DB instance.

    Step 3 – Remote PowerShell to DemoVM1

        Enter-PSSession -ComputerName condemo.cloudapp.net -Port 61503 -Credential <UserName> -UseSSL -SessionOption (New-PSSessionOption -SkipCACheck -SkipCNCheck)

    You will then be prompted to enter the password.

    Step 4 – Open port 1433 in the Windows Firewall

        netsh advfirewall firewall add rule name="DemoVM1Port" dir=in localport=1433 protocol=TCP action=allow

    Output:

        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        Rule Name:      DemoVM1Port
        ----------------------------------------------------------------------
        Enabled:        Yes
        Direction:      In
        Profiles:       Domain,Private,Public
        Grouping:
        LocalIP:        Any
        RemoteIP:       Any
        Protocol:       TCP
        LocalPort:      1433
        RemotePort:     Any
        Edge traversal: No
        Action:         Allow
        Ok.

    Step 5 – Now connect from DemoVM3 to the DB instance in DemoVM1

    RDP into VM3, launch SSMS and connect to VM1's DB instance as follows. You must specify the full server name using the DNS address and the public port number configured above.

    Step 6 – Close port 1433 in the Windows Firewall

        netsh advfirewall firewall delete rule name=DemoVM1Port

    Output:

        Deleted 1 rule(s).
        Ok.

        netsh advfirewall firewall show rule name=DemoVM1Port

        No rules match the specified criteria.

    Step 7 – Try to connect from DemoVM3 to the DB instance in DemoVM1

    Because port 1433 has been closed (in step 6) in the Windows Firewall on the DemoVM1 machine, we can no longer connect from DemoVM3 remotely to DemoVM1.
    Conclusion

    Through the new support for remote PowerShell in Windows Azure Virtual Machines, one can script and automate many Virtual Machine and SQL management tasks. In this blog, we have demonstrated how to start a remote PowerShell session and re-configure the Virtual Machine firewall to allow (or disallow) SQL Server connections.

    References: SQL Server in Windows Azure Virtual Machines

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article

  • Game Components, Game Managers and Object Properties

    - by George Duckett
    I'm trying to get my head around component-based entity design. My first step was to create various components that could be added to an object. For every component type I had a manager, which would call every component's update function, passing in things like keyboard state etc. as required. The next thing I did was remove the object, and just have each component with an Id. So an object is defined by components having the same Id. Now, I'm thinking that I don't need a manager for all my components; for example, I have a SizeComponent, which just has a Size property. As a result the SizeComponent doesn't have an update method, and the manager's update method does nothing. My first thought was to have an ObjectProperty class which components could query, instead of having these values as properties of components. So an object would have a number of ObjectProperty and ObjectComponent instances. Components would have update logic that queries the object for properties. The manager would manage calling the component's update method. This seems like over-engineering to me, but I don't think I can get rid of the components, because I need a way for the managers to know which objects need which component logic to run (otherwise I'd just remove the component completely and push its update logic into the manager). Is this (having ObjectProperty, ObjectComponent and ComponentManager classes) over-engineering? What would be a good alternative?
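    To make the proposed split concrete, here is a minimal C# sketch of the ObjectProperty/ObjectComponent arrangement described in the question; the MovementComponent and the "Size" property name are illustrative, not from the question:

        // Sketch only: one possible shape for the classes the question names.
        using System.Collections.Generic;

        class ObjectProperty
        {
            public string Name;
            public object Value; // e.g. "Size" -> a float
        }

        abstract class ObjectComponent
        {
            public int Id; // an "object" is the set of components/properties sharing an Id
            public abstract void Update(Dictionary<string, ObjectProperty> properties, float dt);
        }

        class MovementComponent : ObjectComponent
        {
            public override void Update(Dictionary<string, ObjectProperty> properties, float dt)
            {
                // Query shared state instead of owning it.
                float size = (float)properties["Size"].Value;
                // ... movement logic that reads/writes other properties ...
            }
        }

        class ComponentManager
        {
            readonly List<ObjectComponent> components = new List<ObjectComponent>();
            readonly Dictionary<int, Dictionary<string, ObjectProperty>> propertiesById =
                new Dictionary<int, Dictionary<string, ObjectProperty>>();

            public void Update(float dt)
            {
                foreach (var c in components)
                    c.Update(propertiesById[c.Id], dt);
            }
        }

    In this arrangement, data-only pieces like SizeComponent become plain ObjectProperty entries, so nothing with an empty update method needs a manager.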

    Read the article

  • Enum.HasFlag

    - by Scott Dorman
    An enumerated type, also called an enumeration (or just an enum for short), is simply a way to create a numeric type restricted to a predetermined set of valid values with meaningful names for those values. While most enumerations represent discrete values, or well-known combinations of those values, sometimes you want to combine values in an arbitrary fashion. These enumerations are known as flags enumerations, because the values represent flags which can be set or unset. To combine multiple enumeration values, you use the logical OR operator. For example, consider the following:

        public enum FileAccess
        {
            None = 0,
            Read = 1,
            Write = 2,
        }

        class Program
        {
            static void Main(string[] args)
            {
                FileAccess access = FileAccess.Read | FileAccess.Write;
                Console.WriteLine(access);
            }
        }

    The output of this simple console application is 3: the numeric value associated with the combination of FileAccess.Read and FileAccess.Write. Clearly, this isn't the best representation. What you really want is for the output to look like "Read, Write". To achieve this result, you simply add the Flags attribute to the enumeration. The Flags attribute changes how the string representation of the enumeration value is displayed when using the ToString() method. Although the .NET Framework does not require it, enumerations that will be used to represent flags should be decorated with the Flags attribute, since it provides a clear indication of intent. One "problem" with flags enumerations is determining when a particular flag is set. The code to do this isn't particularly difficult, but unless you use it regularly it can be easy to forget. To test if the access variable has the FileAccess.Read flag set, you would use the following code:

        (access & FileAccess.Read) == FileAccess.Read

    Starting with .NET 4, a HasFlag method has been added to the Enum class which allows you to easily perform these tests:

        access.HasFlag(FileAccess.Read)

    This method follows one of the "themes" for the .NET Framework 4, which is to simplify and reduce the amount of boilerplate code like this you must write.

    Technorati Tags: .NET, C# 4
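    Putting the pieces together, a minimal runnable sketch of the before/after behavior described above:

        using System;

        [Flags]
        public enum FileAccess
        {
            None = 0,
            Read = 1,
            Write = 2,
        }

        class Program
        {
            static void Main()
            {
                FileAccess access = FileAccess.Read | FileAccess.Write;

                Console.WriteLine(access);                                         // "Read, Write" thanks to [Flags]
                Console.WriteLine(access.HasFlag(FileAccess.Read));                // True (.NET 4+)
                Console.WriteLine((access & FileAccess.Read) == FileAccess.Read); // the pre-.NET 4 idiom
            }
        }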

    Read the article

  • Change Password vs. Reset Password - Week 42

    - by OWScott
    You can find this week's video here. The differences between a change password and a reset password are not well known. This week's video walks through the differences and shows them in action. Tune in to find out more about password management. It wasn't until fairly recently that I realized that there is a difference between a change password and a reset password. One is safe, while the other is not so much. I remember when Windows Server 2003 was first released, and resetting a user's password had a distinct warning about irreversible loss of information. I wondered why it wasn't mentioned in previous operating systems, but I also wondered if it was true, since I never personally noticed any impact. It wasn't until about a year ago that I really dug in to understand this topic better. This week's lesson covers the differences between a change password and a reset password. In this video we also take a look at it in action so that we have a solid understanding of the topic, and briefly discuss how it works for programming APIs too. This is now week 42 of a 52-week series for the web pro. You can view past and future weeks here: http://dotnetslackers.com/projects/LearnIIS7/ You can find this week's video here.
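    For the programming-API angle mentioned above, here is a hedged C# sketch of the two operations via ADSI (System.DirectoryServices); the LDAP path and passwords are placeholders. ChangePassword requires the old password, while SetPassword is the administrative reset the video warns about:

        // Sketch: change vs. reset through IADsUser. Placeholder path/passwords.
        using System;
        using System.DirectoryServices;

        class PasswordDemo
        {
            static void Main()
            {
                using (var user = new DirectoryEntry("LDAP://CN=SomeUser,CN=Users,DC=example,DC=com"))
                {
                    // Change: caller must prove knowledge of the old password,
                    // so data protected with it (EFS keys, stored credentials) stays usable.
                    user.Invoke("ChangePassword", new object[] { "oldP@ss1", "newP@ss1" });

                    // Reset: administrative override, no old password required --
                    // this is the operation that can cause irreversible loss of access.
                    user.Invoke("SetPassword", new object[] { "newP@ss1" });
                }
            }
        }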

    Read the article

  • SQL SERVER – Get Free Books While Learning SQL Server 2012 Error Handling

    - by pinaldave
    Fans of this blog are aware that I have recently released my new books SQL Server Functions and SQL Server 2012 Queries. The books are available on the market in a limited edition, but you can get them for free on Wednesday, Nov 14, 2012. Not only are they free, but you can additionally learn SQL Server 2012 error handling as well. My book's co-author Rick Morelan is presenting a webinar tomorrow on SQL Server 2012 Error Handling. Here is a brief abstract of the webinar: People are often shocked when they see the demo in this talk where the first statement fails and all other statements still commit. For example, did you know that BEGIN TRAN…COMMIT TRAN is not enough to make everything work together? These mistakes can still happen to you in SQL Server 2012 if you are not aware of the options. Rick Morelan, creator of Joes2Pros, will teach you how to predict the Error Action and control it with and without structured error handling. Register for the webinar now to learn: how to predict the Error Action and control it; nuances between successful and failing SQL statements; essential SQL Server 2012 configuration options. Register for the webinar and be present during the session. My co-author will announce a winner (maybe more than one) during the session. If you are present during the session, you are eligible to win the book. The webinar is scheduled at 2 different times to accommodate various time zones: 1) 10am ET/7am PT, 2) 1pm ET/11am PT. Each webinar will have its own winner. You can increase your chances by attending both webinars. Do not miss this opportunity; register for the webinar right now. The recordings of the webinar may not be available.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, SQLServer, T SQL, Technology

    Read the article

  • How do I animate a spritesheet in cocos2d Android

    - by user729053
    I'm trying to animate a sprite from a spritesheet; the goal is to make the character walk from left to right. I subclassed CCSprite as CharacterSprite. This is my code:

        CCSpriteFrameCache.sharedSpriteFrameCache().addSpriteFrames("WalkAnimation.plist");
        CCSpriteSheet spriteSheet = CCSpriteSheet.spriteSheet("WalkAnimation.png");
        w.addChild(spriteSheet);

        ArrayList<CCSpriteFrame> walkSprites = new ArrayList<CCSpriteFrame>();
        for (int i = 1; i <= 8; ++i) {
            walkSprites.add(CCSpriteFrameCache.sharedSpriteFrameCache().spriteFrameByName("walk" + i + ".png"));
        }

        // float randomFactor = (float) Math.random() * 10;
        CCAnimation walkAnimation = CCAnimation.animation("walk", 0.1f, walkSprites);
        Log.v("addEnemy", "Show");
        this.character = CCSprite.sprite(walkSprites.get(0));
        Log.v("addEnemy", "Don't Show");
        this.walkAction = CCRepeatForever.action(CCAnimate.action(walkAnimation, false));
        character.runAction(walkAction);

        for (int k = 0; k < amount; k++) {
            float randomFactor = (float) Math.random() * 10;
            character.setPosition(CGPoint.ccp(-randomFactor * character.getContentSize().width / 2, winSize.height / 6.0f));
            spriteSheet.addChild(character);
        }

    This code is in my 2nd scene. Whenever I press Start Game, nothing happens, but the code executes up until:

        character = CharacterSprite.sprite("walk1.png");

    Can someone provide me with a snippet of animation using a spritesheet?

    Read the article

  • Stack Trace Logger [migrated]

    - by Chris Okyen
    I need to write a parent Java class that classes using recursion can extend. The parent class will be able to realize whenever the call stack changes (you enter a method, temporarily leave it to go to another method call, or you are finished with the method) and then print it out. I want it to print on the console, but also clear the console every time, so it shows the stacks horizontally and you can see the height of each stack — what popped off and what was pushed on. It should also print out when a base case is reached for recursive functions. First: how can I use the StackTraceElement and Thread classes to detect automatically whenever the stack has popped or pushed an element, without calling it manually? Second, how would I do the clearing thing? For instance, if I had the code:

        public class Recursion {
            private static void recursion(int i) {
                if (i < 10) {
                    System.out.println('A');
                } else {
                    recursion(i / 10);
                    System.out.println('B');
                }
            }

            public static void main(String[] argv) {
                recursion(102);
            }
        }

    It would need to print out the stack when entering main(), when entering recursion(102) from main(), when it enters recursion(102 / 10), which is recursion(10), from recursion(102), and when it enters recursion(10 / 10), which is recursion(1), from recursion(10). It should print out a message when it reaches the base case recursion(1), then print out the stacks of the reversed revisitation of recursion(10), recursion(102) and main(), and finally print out that we are exiting main().

    Read the article

  • Clientside anticheating in multiplayer game 1vs1

    - by garnav
    I'm developing a simple card game, where there will be a matchmaking system that will put you against another human player. This will be the only game mode available: a 1vs1 against another human, no AI. I want to prevent cheating as much as possible. I have already read a lot of similar questions here, and I already know that I cannot trust the client and have to make all verifications server side. I intend to have a server (I need one for the matchmaking anyway) and I intend to make some verifications server side, but if I want to check everything server side, my server has to keep track of the state of all current games and check every action, and I don't have the money/infrastructure to support that server. My idea is to make clients check and verify some of the actions made by their opponent*, and if they find some illegal action, notify the possible cheating to the server and make the server verify it. This will still require my server to keep track of all current games, but it will save resources by checking only the things that cannot be checked client side (like card order in the deck), and checking other things only when they are actually flagged as wrong.

    *(only those they can check without being able to cheat themselves! For example, they can't check whether a played card was in the opponent's hand, because that would require them to know all the cards in that hand.)

    Summing up, my questions are: Is this a viable approach? Will I actually save resources doing this, or is the extra complexity in the server and client for exchanging these messages not worth it? Do you know any game that has successfully or unsuccessfully tried a similar approach? Thanks all for reading and answering.

    Read the article

  • Where'd My Data Go? (and/or...How Do I Get Rid of It?)

    - by David Paquette
    Want to get a better idea of how cascade deletes work in Entity Framework Code First scenarios? Want to see it in action? Stick with us as we quickly demystify what happens when you tell your data context to nuke a parent entity. This post is authored by Calgary .NET User Group Leader David Paquette with help from Microsoft MVP in ASP.NET James Chambers. We got to spend a great week back in March at Prairie Dev Con West, chock-full of sessions, presentations, workshops, conversations and, of course, questions. One of the questions that came up during my session: "How does Entity Framework Code First deal with cascading deletes?". James and I had different thoughts on what the default was, whether it was different from SQL Server, whether it was the same as EF proper, and whether there was a way to override whatever the default was. So we built a set of examples and figured out that the answer is simple: it depends. (Download Samples)

    Consider the example of a hockey league. You have several different entities in the league, including games, teams that play the games and players that make up the teams. Each team also has a mascot. If you delete a team, we need a couple of things to happen: the team, games and mascot will be deleted, and the players for that team will remain in the league (and therefore the database) but should no longer be assigned to a team. So, let's make this start to come together with a look at the default behaviour in SQL when using an EDMX-driven project.

    The Reference – Understanding EF's Behaviour with an EDMX/DB First Approach

    First up, let's take a look at the DB first approach. In the database, we defined 4 tables: Teams, Players, Mascots, and Games. We also defined 4 foreign keys as follows:

    Players.Team_Id (NULL) –> Teams.Id
    Mascots.Id (NOT NULL) –> Teams.Id (ON DELETE CASCADE)
    Games.HomeTeam_Id (NOT NULL) –> Teams.Id
    Games.AwayTeam_Id (NOT NULL) –> Teams.Id

    Note that by specifying ON DELETE CASCADE for the Mascots –> Teams foreign key, the database will automatically delete the team's mascot when the team is deleted. While we want the same behaviour for the Games –> Teams foreign keys, it is not possible to accomplish this using ON DELETE CASCADE in SQL Server. Specifying ON DELETE CASCADE on these foreign keys would cause a circular reference error: "The series of cascading referential actions triggered by a single DELETE or UPDATE must form a tree that contains no circular references. No table can appear more than one time in the list of all cascading referential actions that result from the DELETE or UPDATE" – MSDN

    When we create an entity data model from the above database, we get the following:

    In order to get the Games to be deleted when the Team is deleted, we need to specify an End1 OnDelete action of Cascade for the HomeGames and AwayGames associations. Now we have an Entity Data Model that accomplishes what we set out to do. One caveat here is that Entity Framework will only properly handle the cascading delete when the players and games for the team have been loaded into memory. For a more detailed look at cascade delete in EF Database First, take a look at this blog post by Alex James.

    Building The Same Sample with EF Code First

    Next, we're going to build up the model with the code first approach.
    EF Code First is defined on the ADO.NET team blog as such: "Code First allows you to define your model using C# or VB.Net classes, optionally additional configuration can be performed using attributes on your classes and properties or by using a Fluent API. Your model can be used to generate a database schema or to map to an existing database."

    Entity Framework Code First follows some conventions to determine when to cascade delete on a relationship. More details can be found on MSDN: If a foreign key on the dependent entity is not nullable, then Code First sets cascade delete on the relationship. If a foreign key on the dependent entity is nullable, Code First does not set cascade delete on the relationship, and when the principal is deleted the foreign key will be set to null. The multiplicity and cascade delete behavior detected by convention can be overridden by using the fluent API. For more information, see Configuring Relationships with Fluent API (Code First).

    Our DbContext consists of 4 DbSets:

        public DbSet<Team> Teams { get; set; }
        public DbSet<Player> Players { get; set; }
        public DbSet<Mascot> Mascots { get; set; }
        public DbSet<Game> Games { get; set; }

    When we set the Mascot –> Team relationship to required, Entity Framework will automatically delete the Mascot when the Team is deleted. This can be done either using the [Required] data annotation attribute, or by overriding the OnModelCreating method of your DbContext and using the fluent API.

    Data Annotations:

        public class Mascot
        {
            public int Id { get; set; }
            public string Name { get; set; }

            [Required]
            public virtual Team Team { get; set; }
        }

    Fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);
        }

    The Player –> Team relationship is automatically handled by the Code First conventions. When a Team is deleted, the Team property for all the players on that team will be set to null. No additional configuration is required; however, all the Player entities must be loaded into memory for the cascading to work properly.

    The Game –> Team relationship causes some grief in our Code First example. If we try setting the HomeTeam and AwayTeam relationships to required, Entity Framework will attempt to set ON DELETE CASCADE for the HomeTeam and AwayTeam foreign keys when creating the database tables. As we saw in the database first example, this causes a circular reference error and throws the following SqlException: "Introducing FOREIGN KEY constraint 'FK_Games_Teams_AwayTeam_Id' on table 'Games' may cause cycles or multiple cascade paths. Specify ON DELETE NO ACTION or ON UPDATE NO ACTION, or modify other FOREIGN KEY constraints. Could not create constraint."

    To solve this problem, we need to disable the default cascade delete behaviour using the fluent API:

        protected override void OnModelCreating(DbModelBuilder modelBuilder)
        {
            modelBuilder.Entity<Mascot>().HasRequired(m => m.Team);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.HomeGames)
                .WithRequired(g => g.HomeTeam)
                .WillCascadeOnDelete(false);

            modelBuilder.Entity<Team>()
                .HasMany(t => t.AwayGames)
                .WithRequired(g => g.AwayTeam)
                .WillCascadeOnDelete(false);

            base.OnModelCreating(modelBuilder);
        }

    Unfortunately, this means we need to manually manage the cascade delete behaviour. When a Team is deleted, we need to manually delete all the home and away Games for that Team.
        foreach (Game awayGame in jets.AwayGames.ToArray())
        {
            entities.Games.Remove(awayGame);
        }
        foreach (Game homeGame in homeGames)
        {
            entities.Games.Remove(homeGame);
        }
        entities.Teams.Remove(jets);
        entities.SaveChanges();

    Overriding the Defaults – When and How To

    As you have seen, the default behaviour of Entity Framework Code First can be overridden using the fluent API. This can be done by overriding the OnModelCreating method of your DbContext, or by creating separate model override files for each entity. More information is available on MSDN.

    Going Further

    These were simple examples, but they helped us illustrate a couple of points. First of all, we were able to demonstrate the default behaviour of Entity Framework when dealing with cascading deletes, specifically how entity relationships affect the outcome. Secondly, we showed you how to modify the code and control the behaviour to get the outcome you're looking for. Finally, we showed you how easy it is to explore this kind of thing, and we're hoping that you get a chance to experiment even further. For example, did you know that: Entity Framework Code First also works seamlessly with SQL Azure (MSDN); database creation defaults can be overridden using a variety of IDatabaseInitializers (Understanding Database Initializers); and you can use code-based migrations to manage database upgrades as your model continues to evolve (MSDN).

    Next Steps

    There's no time like the present to start the learning, so here's what you need to do: Get up-to-date in Visual Studio 2010 (VS2010 | SP1) or Visual Studio 2012 (VS2012). Build yourself a project to try these concepts out (or download the sample project). Get into the community and ask questions! There are a ton of great resources out there and community members willing to help you out (like these two guys!). Good luck!

    About the Authors

    David Paquette works as a lead developer at P2 Energy Solutions in Calgary, Alberta, where he builds commercial software products for the energy industry. Outside of work, David enjoys outdoor camping, fishing, and skiing. David is also active in the software community, giving presentations both locally and at conferences. David also serves as the President of the Calgary .NET User Group.

    James Chambers crafts software awesomeness with an incredible team at LogiSense Corp, based in Cambridge, Ontario. A husband, father and humanitarian, he is currently residing in the province of Manitoba, where he resists the urge to cheer for the Jets and maintains his allegiance to the Calgary Flames. When he's not active with the family, outdoors or volunteering, you can find James speaking at conferences and user groups across the country about web development and related technologies.

    Read the article

  • XNA 4.0: Problem with loading XML content files

    - by 200
    I'm new to XNA so hopefully my question isn't too silly. I'm trying to load content data from XML. My XML file is placed in my Content project (the new XNA 4 thing), called "Event1.xml":

        <?xml version="1.0" encoding="utf-8" ?>
        <XnaContent>
          <Asset Type="MyNamespace.Event"> <!-- error on this line -->
            <name>1</name>
          </Asset>
        </XnaContent>

    My "Event" class is placed in my main project:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Xml.Linq;

        namespace MyNamespace
        {
            public class Event
            {
                public string name;
            }
        }

    The XML file is being loaded by my main game class inside the LoadContent() method:

        Event temp = Content.Load<Event>("Event1");

    And this is the error I'm getting: "There was an error while deserializing intermediate XML. Cannot find type 'MyNamespace.Event'". I think the problem is that at the time the XML is being evaluated, the Event class has not been established, because one file is in my main project while the other is in my Content project (being an XNA 4.0 project and such). I have also tried changing the build action of the XML file from Compile to Content; however, the program was then unable to find the file and gave this other warning: "Project item 'Event1.xml' was not built with the XNA Framework Content Pipeline. Set its Build Action property to Compile to build it." Any suggestions are appreciated. Thanks.
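    For reference, the arrangement commonly suggested for this situation (an assumption on our part, not something stated in the question) is to move the shared type into a separate class library that both the game project and the Content project reference, so the content pipeline can resolve "MyNamespace.Event" at build time:

        // GameData project: a plain class library referenced by BOTH the game
        // project and the Content project (Content -> References -> Add Reference).
        // Sketch only; this mirrors the Event class from the question.
        namespace MyNamespace
        {
            public class Event
            {
                public string name;
            }
        }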

    Read the article

  • How can I email a vCard to users who are unable to download it?

    - by Zachary Lewis
    I have created vCards for the people in my business, and they are great for users on standard browsers; however, users browsing on a mobile device (notably iPhone) are unable to download and view my vCard. Is there a service that I can direct them to that will allow them to receive an email containing my vCard, or is there a simple way I can set this up myself? I am running my site on WordPress, and initial attempts have failed spectacularly. I'd like for them to be given the option to perform either action, but have the preferred action more prominently visible (probably via user-agent detection). Something along the lines of: It looks like you're on an iPhone! It's a bummer they can't download vCards, but if you enter your email address, we'll wrap one up and send it your way! Don't worry, we won't send you junk email. Heck, we don't even save your email address! [email protected] Think you've got it all figured out? Fine, download the vCard instead! If you know of a service or simple-to-implement PHP library (or WordPress plug-in), please let me know! If not, let me know what the best solution to this problem is!

    Read the article

  • Phishing attack stuck with jsp loginAction.do page?

    - by user970533
    I'm testing a phishing website against a staged replica of a JSP web application. I'm doing the usual attack, which involves changing the post and action fields of the source code to divert to my own JSP script, capturing the logins, and redirecting the victim to the original website. It looks easy, but trust me, it has been more than 2 weeks and I still cannot write the logins to the text file. I have tested the JSP page on my local WAMP server and it works fine. On the staged replica, when I click on the OK button for the user/password fields, I'm taken to the loginAction.do script. I checked this using the Tamper Data add-on in Firefox. The only way I was able to make my script run was to use Burp proxy, intercept the request, and change the action parameter to refer to my uploaded script. I want to know: what does loginAction.do do? I have googled it - it's quite common to see it in JSP applications. I have checked the code; there is nothing that tells me why the page always points to the .do script instead of mine. Is there some kind of redirection in Tomcat? I'd like to know. I'm unable to exploit this attack vector, and I need the community's help.

    Read the article

  • How do I compile and install a Wikipedia lens?

    - by user49523
    I read a tutorial about how to compile and install a Wikipedia lens, but it didn't work. The tutorial sounds easy - I just copied and pasted into the file it said to edit. I have tried a few times; here are 2 of my edits.

    Edit 1:

        import logging
        import optparse

        import gettext
        from gettext import gettext as _
        gettext.textdomain('wikipedia')

        from singlet.lens import SingleScopeLens, IconViewCategory, ListViewCategory

        from wikipedia import wikipediaconfig

        import urllib2
        import simplejson


        class WikipediaLens(SingleScopeLens):

            wiki = "http://en.wikipedia.org"

            def wikipedia_query(self, search):
                try:
                    search = search.replace(" ", "|")
                    url = ("%s/w/api.php?action=opensearch&limit=25&format=json&search=%s"
                           % (self.wiki, search))
                    results = simplejson.loads(urllib2.urlopen(url).read())
                    print "Searching Wikipedia"
                    return results[1]
                except (IOError, KeyError, urllib2.URLError, urllib2.HTTPError,
                        simplejson.JSONDecodeError):
                    print "Error : Unable to search Wikipedia"
                    return []

            class Meta:
                name = 'Wikipedia'
                description = 'Wikipedia Lens'
                search_hint = 'Search Wikipedia'
                icon = 'wikipedia.svg'
                search_on_blank = True

            # TODO: Add your categories
            articles_category = ListViewCategory("Articles", "dialog-information-symbolic")

            def search(self, search, results):
                for article in self.wikipedia_query(search):
                    results.append("%s/wiki/%s" % (self.wiki, article),
                                   "http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png",
                                   self.articles_category,
                                   "text/html",
                                   article,
                                   "Wikipedia Article",
                                   "%s/wiki/%s" % (self.wiki, article))
                pass

    Edit 2:

        import urllib2
        import simplejson

        import logging
        import optparse

        import gettext
        from gettext import gettext as _
        gettext.textdomain('wikipediaa')

        from singlet.lens import SingleScopeLens, IconViewCategory, ListViewCategory

        from wikipediaa import wikipediaaconfig


        class WikipediaaLens(SingleScopeLens):

            wiki = "http://en.wikipedia.org"

            def wikipedia_query(self, search):
                try:
                    search = search.replace(" ", "|")
                    url = ("%s/w/api.php?action=opensearch&limit=25&format=json&search=%s"
                           % (self.wiki, search))
                    results = simplejson.loads(urllib2.urlopen(url).read())
                    print "Searching Wikipedia"
                    return results[1]
                except (IOError, KeyError, urllib2.URLError, urllib2.HTTPError,
                        simplejson.JSONDecodeError):
                    print "Error : Unable to search Wikipedia"
                    return []

            def search(self, search, results):
                for article in self.wikipedia_query(search):
                    results.append("%s/wiki/%s" % (self.wiki, article),
                                   "http://upload.wikimedia.org/wikipedia/commons/6/63/Wikipedia-logo.png",
                                   self.articles_category,
                                   "text/html",
                                   article,
                                   "Wikipedia Article",
                                   "%s/wiki/%s" % (self.wiki, article))
                pass

            class Meta:
                name = 'Wikipedia'
                description = 'Wikipedia Lens'
                search_hint = 'Search Wikipedia'
                icon = 'wikipedia.svg'
                search_on_blank = True

            # TODO: Add your categories
            articles_category = ListViewCategory("Articles", "dialog-information-symbolic")

            def search(self, search, results):
                # TODO: Add your search results
                results.append('https://wiki.ubuntu.com/Unity/Lenses/Singlet',
                               'ubuntu-logo',
                               self.example_category,
                               "text/html",
                               'Learn More',
                               'Find out how to write your Unity Lens',
                               'https://wiki.ubuntu.com/Unity/Lenses/Singlet')
                pass

    So, what can I change in the edit? (If anybody can give me the entire edited file, I will appreciate it.)

    Read the article

  • ubuntu apache2 start, stop, status, restart and reload commands fail

    - by Assil
    Hi, I am new to Ubuntu and I'm still trying to figure this OS out (for work it's brilliant, better than Windows). Before I wrote this question, I did my research via Google, this website and even StackOverflow.com; whatever error came up, I googled it, but with no success in solving this. Back to the main point: I tried to install Apache2 (as part of LAMP) with this guide (in German) and this one (in English). Then I got stuck at the start command (with which one starts the apache2 server). My first attempt to run the server was a success, until I ran sudo /etc/init.d/apache2 reload and it showed me an error:

        $ sudo /etc/init.d/apache2 reload
         * Starting web server apache2
        Syntax error on line 1 of /etc/apache2/ports.conf:
        Invalid command 'oder', perhaps misspelled or defined by a module not included in the server configuration
        Action 'reload' failed.
        The Apache error log may have more information.
        [fail]

    I didn't even write the word "oder", so I closed the shell and opened it anew; it showed the same error. From my research I learned that my index.html file (in my /var/www folder), accessed via the browser at http://localhost, should show up and confirm success. So I removed apache2 and installed it again, but now the following error appears:

        $ sudo /etc/init.d/apache2 start
         * Starting web server apache2
        Syntax error on line 1 of /etc/apache2/ports.conf:
        Invalid command 'oder', perhaps misspelled or defined by a module not included in the server configuration
        Action 'start' failed.
        The Apache error log may have more information.
        [fail]

    And still, I never wrote "oder" in any command. I appreciate every bit of help I can get; many thanks, and have a nice day.
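
    For what it's worth, the error means Apache is finding the literal word "oder" ("or" in German) on line 1 of /etc/apache2/ports.conf - most likely a fragment of the German tutorial's prose pasted into the file. A hedged sketch of roughly what a stock Ubuntu ports.conf of that era contains (compare against a fresh copy rather than trusting this verbatim):

        # /etc/apache2/ports.conf - directives only, no tutorial prose
        NameVirtualHost *:80
        Listen 80

        <IfModule mod_ssl.c>
            # SSL virtual hosts listen here when the module is enabled
            Listen 443
        </IfModule>

    Once line 1 holds a valid directive (or the stray text is deleted), sudo /etc/init.d/apache2 start should run cleanly. Note that removing a package does not delete modified config files, which is why reinstalling didn't fix this; purging the package (sudo apt-get purge apache2) or simply editing the file would.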

    Read the article

  • Slick2D - Entities and rendering

    - by Zarkopafilis
    I have been trying to create my very first game for quite a while, and have followed some tutorials, but I am stuck at creating my entity system. I have made a class that extends the Entity class; here it is:

        public class Lazer extends Entity { // just say that it is some sort of bullet

            private Play p; // Play class (state)
            private float x;
            private float y;
            private int direction;

            public Lazer(Play p, float x, float y, int direction) {
                this.p = p;
                this.x = x;
                this.y = y;
                this.direction = direction;
                p.ent.add(this);
            }

            public int getDirection() {
                // specifies which coordinate (x/y) gets increased on update
                return direction;
            }

            public float getX() { return x; }
            public float getY() { return y; }
            public void setY(float y) { this.y = y; }
            public void setX(float x) { this.x = x; }
        }

    The class seems pretty good, after spending some hours googling what the right approach would be. Now, on to my Play class: I can't figure out how to draw the lazers (I have added them to an ArrayList). In the update method, I move them based on their direction:

        public void moveLazers(int delta) {
            for (int i = 0; i < ent.size(); i++) {
                Lazer l = ent.get(i);
                if (l.getDirection() == 1) {
                    l.setX(l.getX() + delta * .1f);
                } else if (l.getDirection() == 2) {
                    l.setX(l.getX() - delta * .1f);
                } else if (l.getDirection() == 3) {
                    l.setY(l.getY() + delta * .1f);
                } else if (l.getDirection() == 4) {
                    l.setY(l.getY() - delta * .1f);
                }
            }
        }

    Now I am stuck at the render method. Is this the correct way of doing things, or do I need to change anything? I also need to know whether collision detection belongs in the update method. Thanks in advance ~ Teo Ntakouris
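
    A minimal render sketch in Slick2D, assuming Play is a BasicGameState, ent is the ArrayList of Lazer objects, and lazerImage is a hypothetical Image loaded in init() - none of these names beyond ent are confirmed by the post:

        // In Play - draw each lazer at its current position.
        // Assumes lazerImage = new Image("res/lazer.png"); was done in init().
        public void render(GameContainer gc, StateBasedGame sbg, Graphics g)
                throws SlickException {
            for (int i = 0; i < ent.size(); i++) {
                Lazer l = ent.get(i);
                g.drawImage(lazerImage, l.getX(), l.getY());
            }
        }

    As for the last question: collision checks belong in update() alongside moveLazers(), since render() should only draw the current state, never change it.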

    Read the article

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script with two public fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to instantiate the Blender prefab, set the terrain as its parent, and then scale the prefab instance up. I first did it like this, in the Start() method:

        void Start () {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                    new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }

    At first I thought it didn't work. Later I noticed that it did, in a sense: the scale was applied right at the start, but immediately afterwards the prefab instance snapped back to its initial scale. Putting the scale code in the Update() method fixes it, in that the object now stays scaled all the time:

        void Update () {
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            //...
        }

    However, when I run this code the object is first displayed without the scale applied, and it takes about 5-10 seconds for the scale to happen. During this time everything else works fine (input, logging, etc). The scene is very simple; it's not like it has a lot of stuff to load (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My two-part question is: why doesn't the scale stick when I apply it in Start(), so that I have to keep re-applying it in Update()? And why does it take so long for the scale to apply/show up?

    Read the article

  • One page using querystring or many folders and pages?

    - by ClarkeyBoy
    I have an application where I keep the 'core' code in one folder, for which there is a virtual directory in the root, such that I can include any core file using /myApp/core/bla.asp. I then have two folders outside of this, each with a default.asp that currently uses the querystring to define which page should be displayed. One is for general users; the other is only accessible to users who have permission to manage users / usergroups / permissions. The core code checks the querystring and then checks the permissions for that user. An example as it stands now is:

        default.asp?action=view&viewtype=list&objectid=server

    I am not worried about SEO, as this is an internal app and uses Windows Auth. My question is: is it better the way it is now, or would something like the following be better?

        /server/view/list/
        /server/view/?id=123
        /server/create/
        /server/edit/?id=123
        /server/remove/?id=123

    In each of the above folders I would have a home page which defines the variables currently taken from the querystring - in /server/create/, for example, I would define the action as 'create', the object name as 'server', and so on. In terms of future development, I really have no idea which method would be best. I think the second method would be better for following what page does what, but it is such a huge change to make at this stage that I would really like some opinions, preferably based on experience. PS: Sorry if the tags are wrong - I am new to this forum and thought this was a bit too much of a discussion for Stack Overflow, as that is very much right/wrong-answer based. I got the idea SE is more discussion based.

    Read the article

  • Texture quality issues with libgdx

    - by user1876708
    I have drawn several vector objects and characters (in Adobe Illustrator) for my game on Android. They are all scalable to any size without quality loss (of course - it's vector ^^). I mocked up my game board directly in Illustrator just before preparing my assets in libgdx to implement them in my game. I set all the objects at the right size so that they fit perfectly on the XHDPI device I run my tests on. As you can see it works great (for me at least ^^); the PNG quality is as good as expected! So I exported all my PNGs at this size, set up my assets in libgdx and built my game APK. Here is a screenshot of my game board (don't pay attention to the differences in placement, just compare the objects present in both screenshots). As you can see, the PNGs lose quality in the game. It is clearest on the hedgehog PNG, but also (less obviously) on the mushroom (check the outline) and the hole PNG. If you really pay attention, on every object you can see pixels that are not visible in my first screenshot, and I just can't figure out why this is happening. If you have any ideas, you are very welcome! Thanks. PS: You can compare the two game boards more clearly at these links (view them at 100% on a high-resolution display): Good quality link, from Illustrator; Poor quality link, from the game. Second phase of tests: we display an object (the hedgehog) on our main menu screen to see how it looks. The thing is, it looks exactly as it is supposed to - high quality, no pixels. The hedgehog PNG there comes from an atlas:

        layer.addActor(hedgehog);

    No loss of quality with this method. So we think the problem comes from the method we use to display objects on our game board:

        blocks[9][3] = new Block(TextureUtils.hedgehog, new Vector2(9, 3));

    The block gets its size from the vector we associate with it, but we have a loss of quality with this method.
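
    One common cause of exactly this symptom in libgdx is texture filtering: a Texture defaults to nearest-neighbour sampling, which produces jagged pixels as soon as it is drawn at any size other than 1:1, while images packed into an atlas are often packed with linear filtering already enabled (which would explain the clean menu hedgehog). A hedged sketch, assuming TextureUtils.hedgehog wraps a standalone Texture - the file name below is made up for illustration:

        import com.badlogic.gdx.Gdx;
        import com.badlogic.gdx.graphics.Texture;

        // Ask for bilinear sampling when the texture is scaled up or down.
        // Cheap to try; if the blocks then look smooth, filtering was the cause.
        Texture hedgehog = new Texture(Gdx.files.internal("hedgehog.png"));
        hedgehog.setFilter(Texture.TextureFilter.Linear, Texture.TextureFilter.Linear);

    If the game-board textures do come from a TexturePacker atlas instead, the equivalent is setting filterMin and filterMag to Linear in the pack settings so the atlas pages are created with linear filtering.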

    Read the article
