Search Results

Search found 17476 results on 700 pages for 'static route'.


  • Slick2D + LWJGL collision system

    - by Connor W
    So I've been learning Java for a while and have explored Slick and LWJGL before, but went away from using Slick for a while. I've recently gone back to it (I'm making a platformer, and Tiled will be really helpful). Here's where my problems begin: collision. I have a player polygon, and I check whether it is colliding with my tiled map with this method:

        public static boolean playerCollisionWith() {
            for (int i = 0; i < Blockmap.entities.size(); i++) {
                Block entity1 = (Block) Blockmap.entities.get(i);
                if (playerPoly.intersects(entity1.poly)) {
                    return true;
                }
            }
            return false;
        }

    This would work normally, but I'm using a different method for movement. Instead of just adding a speed variable to the player's x axis, I move like this:

        if (Keyboard.isKeyDown(Keyboard.KEY_RIGHT)) {
            speedX = Math.min(5, speedX + 1);
            moving = true;
            playerPoly.setX(x);
            if (playerCollisionWith()) {
                speedX = -5;
                playerPoly.setX(x);
            }
        }

    That Math.min call is what is messing me up. I can't just set speedX = -5, because when I do, the player "bounces" while the right arrow key is down and the polygon is colliding. Bounces as in flashes back and forth really quickly. I also don't know how to make collisions on the y axis work, whether the player is jumping or not. If I could get some help with how to fix this problem, that would be great. Thank you for the help!
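    A common fix for this kind of oscillation (sketched below; an editorial suggestion, not from the original thread) is to resolve movement one axis at a time: apply the x move, and on collision undo it and zero the velocity rather than reversing it, then do the same for y. The playerPoly, x, speedX and playerCollisionWith() names come from the question; y, speedY and the method itself are assumptions.

        // Hedged sketch: axis-separated movement for a Slick2D polygon player.
        private void move(float dx, float dy) {
            // X axis first: try the move, roll it back on collision.
            x += dx;
            playerPoly.setX(x);
            if (playerCollisionWith()) {
                x -= dx;           // undo instead of reversing, so no back-and-forth flicker
                playerPoly.setX(x);
                speedX = 0;
            }
            // Then the Y axis, so walls and floors/ceilings resolve independently.
            y += dy;
            playerPoly.setY(y);
            if (playerCollisionWith()) {
                y -= dy;
                playerPoly.setY(y);
                speedY = 0;        // landing or hitting a ceiling stops vertical motion
            }
        }

    Handled this way, jumping uses the same code path: gravity feeds dy, and the y pass stops the player on the floor without affecting horizontal movement.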


  • iTunes Title Bar is Equal to the Location of the Library file?

    - by Urda
    I have never been able to get a clear answer on why the latest edition of iTunes does this. I have my entire iTunes library located in C:\itunes\ and the library data files inside C:\itunes\!library_info for backup purposes. However, when version 9 of iTunes came out, the title bar went from saying "iTunes" to saying "!library_info". Is there any way to get around this without moving my data files? It's an annoying "feature", if that is what it is. Again, Apple support and forums were of no help to me. Anyone have insight on this? System info: Windows Vista x86 Ultimate, latest updates. iTunes version 9.0.3.15. Screenshot: http://farm5.static.flickr.com/4072/4414557544_d0b25eb64c_o.jpg Flickr page: http://www.flickr.com/photos/urda/4414557544/


  • Generic Repository with SQLite and SQL Compact Databases

    - by Andrew Petersen
    I am creating a project that has a mobile app (Xamarin.Android) using a SQLite database and a WPF application (Code First Entity Framework 5) using a SQL Compact database. This project will eventually have a SQL Server database as well. Because of this I am trying to create a generic repository, so that I can pass in the correct context depending on which application is making the request. The issue I ran into is that my DataContext for the SQL Compact database inherits from DbContext, while the SQLite database inherits from SQLiteConnection. What is the best way to make this generic, so that it doesn't matter what kind of database is on the back end? This is what I have tried so far on the SQL Compact side:

        public interface IRepository<TEntity>
        {
            TEntity Add(TEntity entity);
        }

        public class Repository<TEntity, TContext> : IRepository<TEntity>, IDisposable
            where TEntity : class
            where TContext : DbContext
        {
            private readonly TContext _context;

            public Repository(DbContext dbContext)
            {
                _context = dbContext as TContext;
            }

            public virtual TEntity Add(TEntity entity)
            {
                return _context.Set<TEntity>().Add(entity);
            }
        }

    And on the SQLite side:

        public class ElverDatabase : SQLiteConnection
        {
            static readonly object Locker = new object();

            public ElverDatabase(string path) : base(path)
            {
                CreateTable<Ticket>();
            }

            public int Add<T>(T item) where T : IBusinessEntity
            {
                lock (Locker)
                {
                    return Insert(item);
                }
            }
        }


  • Replace Whole Site by FTP

    - by Sam Machin
    Hi there, I've got a set of tools which periodically (about once a day) generate a complete set of static HTML pages for a site, with the associated folder structure etc. I then need to put those files onto the production server. My problem is that the server runs IIS (6, I think) and I only have regular FTP access. I need a way to automate the process of publishing the new site, and it needs to be a total replacement of the files each time it's published, e.g. delete the whole folder and contents, then put the new ones up. My source server is an Ubuntu machine and I've got total control at that end. I have tried using CurlFtpFS, but it seems to be too slow for what I'm trying to do, and it locks up. Rgds, Sam
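    One tool worth knowing here (a hedged suggestion, not from the original post): lftp's mirror command can push a local tree to an FTP server and delete remote files that no longer exist locally, which amounts to a full replacement. The host, credentials and paths below are placeholders.

        # Push the freshly generated site, removing anything stale on the server.
        lftp -u username,password -e "mirror --reverse --delete /var/www/build /siteroot; quit" ftp.example.com

    Run from cron on the Ubuntu box, this covers a once-a-day publish without CurlFtpFS in the middle.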


  • SQL Server 2008 Failover and Load Balancing

    - by Jedi Master Spooky
    I have a project with a 2 TB database (450,000,000 rows). I need to provide the project a solution that gives failover and load balancing; what do you recommend? We are going to use a NetApp filer for the data files and for the file system of the project. I read that SQL clustering does not provide load balancing. If I cannot have this feature and have to go only with failover, what server would you recommend (I presume that the key feature here is memory)? We are adding 1,000,000 rows a day. Once a row is inserted, we do a lot of updates to that row for about 1 week, and then the row goes static. Because of this I am thinking of some kind of history table or database or something like that. I am open on the OS/server implementation; I was thinking of a Windows 2008 server with a cluster, but this depends on the database solution.


  • Designing a Content-Based ETL Process with .NET and SFDC

    - by Patrick
    As my firm makes the transition to using SFDC as our main operational system, we've spun together a couple of SFDC portals where we can post customer-specific documents to be viewed at will. As such, we've had the need for pseudo-ETL applications to be implemented that are able to extract metadata from the documents our analysts generate internally (most are industry-standard PDFs, XML, or MS Office formats) and place in networked "queue" folders. From there, our applications scoop up the queued documents and upload them to the appropriate SFDC CRM Content Library along with some select pieces of metadata. I've mostly used DbAmp to broker communication with SFDC (DbAmp is a Linked Server provider that allows you to use SQL conventions to interact with your SFDC Org data). I've been able to create [console] applications in C# that work pretty well, and they're usually structured something like this:

        static void Main()
        {
            // Load parameters from app.config.
            // Get documents from queue.
            var files = someInterface.GetFiles(someFilterOrRegexPattern);
            foreach (var file in files)
            {
                // Extract metadata from the file.
                // Validate some attributes of the file; add any validation errors
                // to an in-memory structure (e.g. List<ValidationErrors>).
                if (isValid)
                {
                    var fileData = File.ReadAllBytes(file);
                    // Upload using some wrapper for an ORM or DAL.
                    someInterface.Upload(fileData, meta.Param1, meta.Param2, ...);
                }
                else
                {
                    // Bounce the file.
                }
            }
            // Report any validation errors (via message bus or SMTP or some such).
        }

    And that's pretty much it. Most of the time I wrap all these operations in a "Worker" class that takes the needed interfaces as constructor parameters. This approach has worked reasonably well, but I just get this feeling in my gut that there's something awful about it and would love some feedback. Is writing an ETL process as a C# console app a bad idea? I'm also wondering if there are some design patterns that would be useful in this scenario that I'm clearly overlooking. Thanks in advance!


  • Slash Notation IP - What is what?

    - by Nirmal
    We just signed up with a new ISP and got a static IP from them. Our previous ISP gave us just one IP and we were able to configure our web server with it. Now we have got this new IP with a slash notation, which is new to me. When I used a CIDR calculator on 202.184.7.52/30, it gave me the following results:

        IP:                202.184.7.52
        Netmask:           255.255.255.252
        Number of hosts:   2
        Network address:   202.184.7.52
        Broadcast address: 202.184.7.55

    Can someone please help me by explaining what these are? I could not understand what the number of hosts means. Is that telling me that I can use two different IPs for DNS (A) records? Also, which one should I set up in my router, the network address or the broadcast address? Thank you very much for any answer you may provide.
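    For reference (an editorial addition, not part of the original question): a /30 leaves 2 host bits, i.e. 4 addresses, of which the first (network) and last (broadcast) are reserved, so the 2 usable hosts are 202.184.7.53 and 202.184.7.54. The bit arithmetic can be checked with a few lines of Java:

        // Hedged sketch: derive netmask, network, broadcast and the usable
        // range for 202.184.7.52/30 with plain bit arithmetic.
        public class Cidr {
            public static void main(String[] args) {
                int ip = (202 << 24) | (184 << 16) | (7 << 8) | 52;
                int prefix = 30;
                int mask = -1 << (32 - prefix);      // 255.255.255.252
                int network = ip & mask;             // 202.184.7.52
                int broadcast = network | ~mask;     // 202.184.7.55
                System.out.println("Netmask:   " + dotted(mask));
                System.out.println("Network:   " + dotted(network));
                System.out.println("Broadcast: " + dotted(broadcast));
                System.out.println("Usable:    " + dotted(network + 1) + " - " + dotted(broadcast - 1));
                System.out.println("Hosts:     " + ((1 << (32 - prefix)) - 2));  // 2 for a /30
            }
            static String dotted(int v) {
                return ((v >>> 24) & 255) + "." + ((v >>> 16) & 255) + "."
                        + ((v >>> 8) & 255) + "." + (v & 255);
            }
        }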


  • MEncoder Install on Ubuntu

    - by Tauqeer Ahmad
    I am writing this after checking almost all the posts, but none of them solved my problem. I am trying to install mencoder to process some videos, but strange errors come up. For example, when I run

        sudo apt-get install mencoder

    the following errors come out:

        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         mencoder : Depends: mplayer
                    Depends: libasound2 (> 1.0.24.1)
                    Depends: libavcodec53 (>= 4:0.8~beta2-2) but it is not installable or
                             libavcodec-extra-53 (>= 4:0.8~beta2-2) but it is not going to be installed
                    Depends: libavformat53 (>= 4:0.8~beta2-2) but it is not installable or
                             libavformat-extra-53 (>= 4:0.8~beta2-2) but it is not going to be installed
                    Depends: libcdparanoia0 (>= 3.10.2+debian) but it is not installable
                    Depends: libenca0 (>= 1.9) but it is not installable
                    Depends: libfontconfig1 (>= 2.8.0) but it is not installable
                    Depends: libgif4 (>= 4.1.4) but it is not installable
                    Depends: libjpeg8 (>= 8c) but it is not installable
                    Depends: liblzo2-2 but it is not installable
                    Depends: libsmbclient (>= 3.0.24) but it is not installable
                    Depends: libspeex1 (>= 1.2~beta3-1) but it is not installable
                    Depends: libtheora0 (>= 1.0) but it is not installable
        E: Unable to correct problems, you have held broken packages.

    Can anyone help solve this issue? I tried to find static builds of MEncoder but could not.
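    A few standard diagnostics for the "held broken packages" message (hedged suggestions, not from the original post):

        sudo apt-get update                    # refresh the package lists first
        dpkg --get-selections | grep hold      # list any packages pinned on hold
        sudo apt-get -f install                # let apt try to repair broken dependencies
        sudo apt-get install mplayer mencoder  # retry, pulling mplayer in explicitly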


  • Extreme Optimization – Mathematical Constants and Basic Functions

    - by JoshReuben
    Machine constants

    The MachineConstants class contains constants for floating-point arithmetic, because the constants defined on the CLS System.Single and System.Double floating-point types do not follow the standard conventions and are of little practical use. Machine constants for the Double type:

        Machine precision: Epsilon, SqrtEpsilon, CubeRootEpsilon
        Largest possible value: MaxDouble, SqrtMaxDouble, LogMaxDouble
        Smallest Double-precision floating-point number greater than zero: MinDouble, SqrtMinDouble, LogMinDouble

    A similar set of constants is available for the Single data type.

    Mathematical constants

    The Constants class contains static fields for many mathematical constants and common expressions involving small integers – if you are doing thousands of iterations, you wouldn't want to recalculate OneOverSqrtTwoPi, Sqrt17 or Log17!

        Fundamental constants: E, the base for the natural logarithm (2.718...); EulersConstant (0.577...); GoldenRatio (1.618...); Pi, the ratio between the circumference and the diameter of a circle (3.1415...)
        Expressions involving fundamental constants: TwoPi, PiOverTwo, PiOverFour, LogTwoPi, PiSquared, SqrPi, SqrtTwoPi, OneOverSqrtPi, OneOverSqrtTwoPi
        Square roots of small integers: Sqrt2, Sqrt3, Sqrt5, Sqrt7, Sqrt17
        Logarithms of small integers: Log2, Log3, Log10, Log17, InvLog10

    Elementary functions

    The IterativeAlgorithm<T> class in the Extreme.Mathematics namespace defines many elementary functions that are missing from System.Math:

        Hyperbolic trig functions: Cosh, Coth, Csch, Sinh, Sech, Tanh
        Inverse hyperbolic trig functions: Acosh, Acoth, Acsch, Asinh, Asech, Atanh
        Exponential, logarithmic and miscellaneous functions: ExpMinus1, the exponential function minus one, e^x - 1; Hypot, the hypotenuse of a right-angled triangle with specified sides; LambertW, Lambert's W function, the (real) solution W of x = W*e^W; Log1PlusX, the natural logarithm of 1+x; Pow, a number raised to an integer power.


  • Intercept method calls in Groovy for automatic type conversion

    - by kerry
    One of the cooler things you can do with Groovy is automatic type conversion. If you want to convert an object to another type, many times all you have to do is invoke the 'as' keyword:

        def letters = 'abcdefghijklmnopqrstuvwxyz' as List

    But what if you want to do something a little fancier, like converting a String to a Date?

        def christmas = '12-25-2010' as Date

        ERROR org.codehaus.groovy.runtime.typehandling.GroovyCastException:
        Cannot cast object '12-25-2010' with class 'java.lang.String' to class 'java.util.Date'

    No bueno! I want to be able to do custom type conversions so that my application can do a simple String to Date conversion. Enter the metaMethod. You can intercept method calls in Groovy using the following method:

        def intercept(name, params, closure) {
            def original = from.metaClass.getMetaMethod(name, params)
            from.metaClass[name] = { Class clazz ->
                closure()
                original.doMethodInvoke(delegate, clazz)
            }
        }

    Using this method, and a little syntactic sugar, we create the following 'Convert' class:

        // Convert.from( String ).to( Date ).using { }
        class Convert {
            private from
            private to

            private Convert(clazz) { from = clazz }

            static def from(clazz) {
                new Convert(clazz)
            }

            def to(clazz) {
                to = clazz
                return this
            }

            def using(closure) {
                def originalAsType = from.metaClass.getMetaMethod('asType', [] as Class[])
                from.metaClass.asType = { Class clazz ->
                    if (clazz == to) {
                        closure.setProperty('value', delegate)
                        closure(delegate)
                    } else {
                        originalAsType.doMethodInvoke(delegate, clazz)
                    }
                }
            }
        }

    Now we can make the following statement to add the automatic date conversion:

        Convert.from( String ).to( Date ).using {
            new java.text.SimpleDateFormat('MM-dd-yyyy').parse(value)
        }

        def christmas = '12-25-2010' as Date

    Groovy baby!


  • C#/XNA get hardware mouse position

    - by Sunder
    I'm using C# and trying to get the hardware mouse position. The first thing I tried was the simple XNA functionality:

        Vector2 position = new Vector2(Mouse.GetState().X, Mouse.GetState().Y);

    After that I draw the mouse as well, and compared to the Windows hardware mouse, this new mouse with XNA-provided coordinates is "slacking off". By that I mean it is behind by a few frames. For example, if the game is running at 600 fps, of course it will be responsive, but at 60 fps the software mouse delay is no longer acceptable. Therefore I tried using what I thought was the hardware mouse:

        [DllImport("user32.dll")]
        [return: MarshalAs(UnmanagedType.Bool)]
        public static extern bool GetCursorPos(out POINT lpPoint);

    but the result was exactly the same. I also tried getting the Windows Forms cursor, and that was a dead end as well: it worked, but with the same delay. Messing around with the XNA settings

        GraphicsDeviceManager.SynchronizeWithVerticalRetrace = true/false
        Game.IsFixedTimeStep = true/false

    did change the delay time, for somewhat obvious reasons, but the bottom line is that regardless, it still was behind the default Windows mouse. I've seen that some games provide an option for a hardware-accelerated mouse, and in others (I think) it is on by default. Can anyone give me a lead on how to achieve that?


  • What are the best options for a root filesystem hosted on SSD under Linux

    - by stsquad
    I'm working on an embedded system which is going to be booting and hosting its rootfs on an SSD disk. We are currently looking at using Intel X18-M SSDs. The file system structure will have a fairly static /usr section (modulo software upgrades) and an active /var and /var/log for maintaining state and logging. Given the wear-levelling done by the underlying flash, does having separate partitions help or hinder? As modern SSDs appear as straight block devices and hide their mapping magic behind their firmware, is there any point trying to optimise the choice of file system that sits on top of the SSD? Finally, does enabling SMART monitoring make any sense in this context, or are there SSD-specific ways of determining the underlying health of the storage hardware?
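    If you do end up splitting partitions, a starting point might look like the sketch below (device names, sizes and the choice of ext4 are all assumptions, not recommendations from the original question); noatime cuts a large class of incidental writes, and keeping /var/log separate isolates its churn:

        # Hedged /etc/fstab sketch for an SSD-backed root filesystem.
        /dev/sda1  /         ext4  noatime,errors=remount-ro  0  1
        /dev/sda2  /usr      ext4  noatime,ro                 0  2   # remount rw for upgrades
        /dev/sda3  /var      ext4  noatime                    0  2
        /dev/sda4  /var/log  ext4  noatime,commit=60          0  2   # batch journal commits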


  • Can't connect to smtp (postfix, dovecot) after making a change and trying to change it back

    - by UberBrainChild
    I am using Postfix and Dovecot along with ZPanel, and I tried enabling SSL, then turned it off again because I did not have SSL configured yet (I realized it was a bit stupid at the time). I am using CentOS 6.4. I get the following error in the mail log (I changed my host name to "myhostname" and my domain to "mydomain.com"):

        Oct 20 01:49:06 myhostname postfix/smtpd[4714]: connect from mydomain.com[127.0.0.1]
        Oct 20 01:49:16 myhostname postfix/smtpd[4714]: fatal: no SASL authentication mechanisms
        Oct 20 01:49:17 myhostname postfix/master[4708]: warning: process /usr/libexec/postfix/smtpd pid 4714 exit status 1
        Oct 20 01:49:17 myhostname postfix/master[4708]: warning: /usr/libexec/postfix/smtpd: bad command startup -- throttling

    Reading forums and similar questions, I figured it was just a service that was not running or installed. However, I can see that saslauthd is currently up and running on my system, and restarting it does not help. Here is my Postfix master.cf:

        #
        # Postfix master process configuration file. For details on the format
        # of the file, see the Postfix master(5) manual page.
        #
        # ***** Unused items removed *****
        #
        # ==========================================================================
        # service type  private unpriv  chroot  wakeup  maxproc command + args
        #               (yes)   (yes)   (yes)   (never) (100)
        # ==========================================================================
        smtp       inet  n  -  n  -      -  smtpd
        #  -o content_filter=smtp-amavis:127.0.0.1:10024
        #  -o receive_override_options=no_address_mappings
        pickup     fifo  n  -  n  60     1  pickup
        submission inet  n  -  -  -      -  smtpd
          -o content_filter=
          -o receive_override_options=no_header_body_checks
        cleanup    unix  n  -  n  -      0  cleanup
        qmgr       fifo  n  -  n  300    1  qmgr
        #qmgr      fifo  n  -  n  300    1  oqmgr
        tlsmgr     unix  -  -  n  1000?  1  tlsmgr
        rewrite    unix  -  -  n  -      -  trivial-rewrite
        bounce     unix  -  -  n  -      0  bounce
        defer      unix  -  -  n  -      0  bounce
        trace      unix  -  -  n  -      0  bounce
        verify     unix  -  -  n  -      1  verify
        flush      unix  n  -  n  1000?  0  flush
        proxymap   unix  -  -  n  -      -  proxymap
        smtp       unix  -  -  n  -      -  smtp
        smtps      inet  n  -  -  -      -  smtpd
        # When relaying mail as backup MX, disable fallback_relay to avoid MX loops
        relay      unix  -  -  n  -      -  smtp
          -o fallback_relay=
        #  -o smtp_helo_timeout=5 -o smtp_connect_timeout=5
        showq      unix  n  -  n  -      -  showq
        error      unix  -  -  n  -      -  error
        discard    unix  -  -  n  -      -  discard
        local      unix  -  n  n  -      -  local
        virtual    unix  -  n  n  -      -  virtual
        lmtp       unix  -  -  n  -      -  lmtp
        anvil      unix  -  -  n  -      1  anvil
        scache     unix  -  -  n  -      1  scache
        #
        # ====================================================================
        # Interfaces to non-Postfix software. Be sure to examine the manual
        # pages of the non-Postfix software to find out what options it wants.
        # ====================================================================
        maildrop   unix  -  n  n  -      -  pipe
          flags=DRhu user=vmail argv=/usr/local/bin/maildrop -d ${recipient}
        uucp       unix  -  n  n  -      -  pipe
          flags=Fqhu user=uucp argv=uux -r -n -z -a$sender - $nexthop!rmail ($recipient)
        ifmail     unix  -  n  n  -      -  pipe
          flags=F user=ftn argv=/usr/lib/ifmail/ifmail -r $nexthop ($recipient)
        bsmtp      unix  -  n  n  -      -  pipe
          flags=Fq. user=foo argv=/usr/local/sbin/bsmtp -f $sender $nexthop $recipient
        #
        # spam/virus section
        #
        smtp-amavis unix -  -  y  -      2  smtp
          -o smtp_data_done_timeout=1200
          -o disable_dns_lookups=yes
          -o smtp_send_xforward_command=yes
        127.0.0.1:10025 inet n - y -     -  smtpd
          -o content_filter=
          -o smtpd_helo_restrictions=
          -o smtpd_sender_restrictions=
          -o smtpd_recipient_restrictions=permit_mynetworks,reject
          -o mynetworks=127.0.0.0/8
          -o smtpd_error_sleep_time=0
          -o smtpd_soft_error_limit=1001
          -o smtpd_hard_error_limit=1000
          -o receive_override_options=no_header_body_checks
          -o smtpd_bind_address=127.0.0.1
          -o smtpd_helo_required=no
          -o smtpd_client_restrictions=
          -o smtpd_restriction_classes=
          -o disable_vrfy_command=no
          -o strict_rfc821_envelopes=yes
        #
        # Dovecot LDA
        dovecot    unix  -  n  n  -      -  pipe
          flags=DRhu user=vmail:mail argv=/usr/libexec/dovecot/deliver -d ${recipient}
        #
        # Vacation mail
        vacation   unix  -  n  n  -      -  pipe
          flags=Rq user=vacation argv=/var/spool/vacation/vacation.pl -f ${sender} -- ${recipient}

    And here is my Dovecot config:

        ##
        ## Dovecot config file
        ##
        listen = *
        disable_plaintext_auth = no
        protocols = imap pop3 lmtp sieve
        auth_mechanisms = plain login
        passdb {
          driver = sql
          args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf
        }
        userdb {
          driver = sql
        }
        userdb {
          driver = sql
          args = /etc/zpanel/configs/dovecot2/dovecot-mysql.conf
        }
        mail_location = maildir:/var/zpanel/vmail/%d/%n
        first_valid_uid = 101
        #last_valid_uid = 0
        first_valid_gid = 12
        #last_valid_gid = 0
        #mail_plugins =
        mailbox_idle_check_interval = 30 secs
        maildir_copy_with_hardlinks = yes
        service imap-login {
          inet_listener imap {
            port = 143
          }
        }
        service pop3-login {
          inet_listener pop3 {
            port = 110
          }
        }
        service lmtp {
          unix_listener lmtp {
            #mode = 0666
          }
        }
        service imap {
          vsz_limit = 256M
        }
        service pop3 {
        }
        service auth {
          unix_listener auth-userdb {
            mode = 0666
            user = vmail
            group = mail
          }
          # Postfix smtp-auth
          unix_listener /var/spool/postfix/private/auth {
            mode = 0666
            user = postfix
            group = postfix
          }
        }
        service auth-worker {
        }
        service dict {
          unix_listener dict {
            mode = 0666
            user = vmail
            group = mail
          }
        }
        service managesieve-login {
          inet_listener sieve {
            port = 4190
          }
          service_count = 1
          process_min_avail = 0
          vsz_limit = 64M
        }
        service managesieve {
        }
        lda_mailbox_autocreate = yes
        lda_mailbox_autosubscribe = yes
        protocol lda {
          mail_plugins = quota sieve
          postmaster_address = [email protected]
        }
        protocol imap {
          mail_plugins = quota imap_quota trash
          imap_client_workarounds = delay-newmail
        }
        lmtp_save_to_detail_mailbox = yes
        protocol lmtp {
          mail_plugins = quota sieve
        }
        protocol pop3 {
          mail_plugins = quota
          pop3_client_workarounds = outlook-no-nuls oe-ns-eoh
        }
        protocol sieve {
          managesieve_max_line_length = 65536
          managesieve_implementation_string = Dovecot Pigeonhole
          managesieve_max_compile_errors = 5
        }
        dict {
          quotadict = mysql:/etc/zpanel/configs/dovecot2/dovecot-dict-quota.conf
        }
        plugin {
          # quota = dict:User quota::proxy::quotadict
          quota = maildir:User quota
          acl = vfile:/etc/dovecot/acls
          trash = /etc/zpanel/configs/dovecot2/dovecot-trash.conf
          sieve_global_path = /var/zpanel/sieve/globalfilter.sieve
          sieve = ~/dovecot.sieve
          sieve_dir = ~/sieve
          sieve_global_dir = /var/zpanel/sieve/
          #sieve_extensions = +notify +imapflags
          sieve_max_script_size = 1M
          #sieve_max_actions = 32
          #sieve_max_redirects = 4
        }
        log_path = /var/log/dovecot.log
        info_log_path = /var/log/dovecot-info.log
        debug_log_path = /var/log/dovecot-debug.log
        mail_debug = yes
        ssl = no

    Does anyone have any ideas or tips on what I can try to get this working? Thanks for all the help.

    EDIT: Output of postconf -n:

        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        delay_warning_time = 4
        disable_vrfy_command = yes
        html_directory = no
        inet_interfaces = all
        mail_owner = postfix
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        mydestination = localhost.$mydomain, localhost
        mydomain = control.yourdomain.com
        myhostname = control.yourdomain.com
        mynetworks = all
        newaliases_path = /usr/bin/newaliases.postfix
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.2.2/README_FILES
        recipient_delimiter = +
        relay_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-relay_domains_maps.cf
        sample_directory = /usr/share/doc/postfix-2.2.2/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_use_tls = no
        smtpd_client_restrictions =
        smtpd_data_restrictions = reject_unauth_pipelining
        smtpd_helo_required = yes
        smtpd_helo_restrictions =
        smtpd_recipient_restrictions = permit_sasl_authenticated, permit_mynetworks, reject_unauth_destination, reject_non_fqdn_sender, reject_non_fqdn_recipient, reject_unknown_recipient_domain
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_local_domain = $myhostname
        smtpd_sasl_path = private/auth
        smtpd_sasl_security_options = noanonymous
        smtpd_sasl_type = dovecot
        smtpd_sender_restrictions =
        smtpd_use_tls = no
        soft_bounce = yes
        transport_maps = hash:/etc/postfix/transport
        unknown_local_recipient_reject_code = 550
        virtual_alias_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_alias_maps.cf, regexp:/etc/zpanel/configs/postfix/virtual_regexp
        virtual_gid_maps = static:12
        virtual_mailbox_base = /var/zpanel/vmail
        virtual_mailbox_domains = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_domains_maps.cf
        virtual_mailbox_maps = proxy:mysql:/etc/zpanel/configs/postfix/mysql-virtual_mailbox_maps.cf
        virtual_minimum_uid = 101
        virtual_transport = dovecot
        virtual_uid_maps = static:101


  • JSP Include: one large bean or bean for each include

    - by shylynx
    I want to refactor a webapp that consists of very distorted JSPs and servlets. Because we can't switch to a web framework easily, we have to keep the JSPs and servlets, and now we are in doubt about how to include pages in one another and how to set up the use:bean directives effectively. As a first step we want to decouple the code for the core actions and the bean creation into servlets. The servlets should forward to their corresponding pages, which should use the bean. The problem here is that each JSP consists of different sub- and sub-sub-JSPs that are included in one another. Here is a shortened extract (because reality is more complex):

        head
            header
                top
                navigation
                actionspanel
        main
            header
            actionspanel
        foot
            footer

    Moreover, each JSP (also the header and footer) uses dynamic data. For example, the title and actionspanel can change on each page reload, or have links and labels that depend on the processing by the preceding servlet. I know that jsp:include directives should only be used for static content and should be avoided for dynamic content. But here we have very large pages that consist of many parts. Now the core questions: Should I use one big bean for each page, so that each bean also holds the data for the header and footer beside its core data, and each subsequently included JSP uses the same bean directive? For example:

        DirectoryJSP <- DirectoryBean
        CompareJSP <- CompareBean

    Or should I use one bean for each JSP, so that each bean only holds the data for one JSP and its own purpose? For example:

        DirectoryJSP <- DirectoryBean
            HeaderJSP <- HeaderBean
            FooterJSP <- FooterBean
        CompareJSP <- CompareBean
            HeaderJSP <- HeaderBean
            FooterJSP <- FooterBean

    In the second case: should the subsequent beans be members of the corresponding parent bean, so that only the parent bean is attached as an attribute to the request? Or should each bean be attached to the request separately?
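    To make the trade-off concrete, here is a minimal sketch of the second (composed) variant; every class and property name is invented for illustration. The parent bean owns the header and footer beans, and only the parent is attached to the request:

        // Hypothetical beans for the composed-parent option.
        public class HeaderBean {
            private String title;
            public String getTitle() { return title; }
            public void setTitle(String title) { this.title = title; }
        }

        public class FooterBean {
            private String copyright;
            public String getCopyright() { return copyright; }
            public void setCopyright(String copyright) { this.copyright = copyright; }
        }

        public class DirectoryBean {
            private final HeaderBean header = new HeaderBean();
            private final FooterBean footer = new FooterBean();
            // ... core directory data ...
            public HeaderBean getHeader() { return header; }
            public FooterBean getFooter() { return footer; }
        }

        // In the servlet, one attribute covers the whole page:
        //   request.setAttribute("page", directoryBean);
        // In the JSPs:
        //   ${page.header.title}, ${page.footer.copyright}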


  • LWJGL texture bleeding fix won't work

    - by user1990950
    I tried a lot of things to fix texture bleeding, but nothing works. I don't want to add a transparent border around my textures, because I already have too many, it would take too much time, and I can't do it in code because I'm loading the textures with Slick. My textures are separate textures, and they seem to wrap around to the other side (texture bleeding). The head, body, arm and leg of my sprite are the separate textures that are "bleeding". Here's the code I'm using to draw a texture:

        public static void drawTextureN(Texture texture, Vector2f position, Vector2f translation,
                Vector2f origin, Vector2f scale, float rotation, Color color, FlipState flipState) {
            texture.setTextureFilter(GL11.GL_NEAREST);
            color.bind();
            texture.bind();

            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_S, GL12.GL_CLAMP_TO_EDGE);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_WRAP_T, GL12.GL_CLAMP_TO_EDGE);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MAG_FILTER, GL11.GL_NEAREST);
            GL11.glTexParameteri(GL11.GL_TEXTURE_2D, GL11.GL_TEXTURE_MIN_FILTER, GL11.GL_NEAREST);

            GL11.glTranslatef((int) position.x, (int) position.y, 0);
            GL11.glTranslatef(-(int) translation.x, -(int) translation.y, 0);
            GL11.glRotated(rotation, 0f, 0f, 1f);
            GL11.glScalef(scale.x, scale.y, 1);
            GL11.glTranslatef(-(int) origin.x, -(int) origin.y, 0);

            float pixelCorrection = 0f;
            GL11.glBegin(GL11.GL_QUADS);
            GL11.glTexCoord2f(0, 0);
            GL11.glVertex2f(0, 0);
            GL11.glTexCoord2f(1, 0);
            GL11.glVertex2f(texture.getTextureWidth(), 0);
            GL11.glTexCoord2f(1, 1);
            GL11.glVertex2f(texture.getTextureWidth(), texture.getTextureHeight());
            GL11.glTexCoord2f(0, 1);
            GL11.glVertex2f(0, texture.getTextureHeight());
            GL11.glEnd();
            GL11.glLoadIdentity();
        }

    I tried a half-pixel correction, but it didn't make any sense to me because of GL12.GL_CLAMP_TO_EDGE, so I set pixelCorrection to 0, but it still won't work.
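    One avenue to consider (an assumption about Slick's behaviour, not a confirmed diagnosis): Slick pads loaded images up to power-of-two texture sizes, so sampling out to u/v = 1 can read the padding even with clamping. Insetting the texture coordinates by half a texel, and clamping them to the real image extent, usually cures this. The sketch below assumes Slick's Texture API, where getWidth()/getHeight() return the image's normalized extent and getImageWidth()/getImageHeight() its pixel size:

        // Hedged sketch: half-texel inset so the sampler never touches the
        // outermost row/column or the power-of-two padding.
        float halfU = 0.5f / texture.getTextureWidth();
        float halfV = 0.5f / texture.getTextureHeight();
        float u0 = halfU, v0 = halfV;
        float u1 = texture.getWidth() - halfU;   // clamp to the real image, not the padding
        float v1 = texture.getHeight() - halfV;

        GL11.glBegin(GL11.GL_QUADS);
        GL11.glTexCoord2f(u0, v0); GL11.glVertex2f(0, 0);
        GL11.glTexCoord2f(u1, v0); GL11.glVertex2f(texture.getImageWidth(), 0);
        GL11.glTexCoord2f(u1, v1); GL11.glVertex2f(texture.getImageWidth(), texture.getImageHeight());
        GL11.glTexCoord2f(u0, v1); GL11.glVertex2f(0, texture.getImageHeight());
        GL11.glEnd();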


  • Dual LAN Printing

    - by Christopher
    I want to use Ubuntu 10.10 Server in a classroom, a computer lab whose bandwidth is provided by a local cable ISP. That's no problem, but the school network has an IP printer that I want to use, and I cannot reach the printer through the cable Internet. However, I have two network cards. How is it possible to use both networks at once? eth0 (static, 192.168.1.254) is plugged into a four-port router, 192.168.1.1. On the public side of the four-port router is the Internet provided by the cable company. I also have the classroom workstations plugged into a switch, which is plugged into the four-port router, so the whole classroom is wired into the cable Internet. Could the other NIC, eth1, be plugged into an Ethernet jack in the wall? It would use the school network, and might receive by DHCP an IP address like 10.140.10.100, with the printer on maybe 10.120.50.10. I was thinking about installing the printer on the server so that it could be shared with the workstations. But how does this work? Can I just plug eth1 into the school network and access both LANs? Thanks for any insight, Chris
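    In principle this works as long as only one interface carries the default route (the cable side), with an explicit route for the school's printer subnet via eth1. A hedged sketch follows; the school gateway 10.140.10.1 is a guess based on the addresses in the question:

        # Default route stays on eth0 (cable Internet); reach the printer subnet via eth1.
        sudo ip route add 10.120.50.0/24 via 10.140.10.1 dev eth1
        # Verify the routing table and printer reachability.
        ip route show
        ping -c 3 10.120.50.10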


  • Netsh commands not working on remote computer

    - by Mike Christiansen
    Hello, at work we are in the process of migrating over 200 computers from static IPs to DHCP. The DHCP server is configured. My biggest hurdle is physically going to every single computer in the area and configuring them all for DHCP, so I am trying to use netsh to accomplish this. However, I cannot even seem to set one computer to DHCP remotely. The commands I am trying are:

        netsh -r COMPUTERNAME interface ip set address name="Local Area Connection" source=dhcp
        netsh -r COMPUTERNAME interface ip set dns name="Local Area Connection" source=dhcp

    This results in the error:

        The following command was not found: interface ip set address "name=Local Area Connection" source=dhcp.

    Any ideas?
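    One workaround (a hedged suggestion, not from the original post) is to sidestep netsh's remoting entirely and run the command locally on each target with Sysinternals PsExec, which needs only admin credentials and the ADMIN$ share:

        rem COMPUTERNAME and the connection name are placeholders.
        psexec \\COMPUTERNAME netsh interface ip set address name="Local Area Connection" source=dhcp
        psexec \\COMPUTERNAME netsh interface ip set dns name="Local Area Connection" source=dhcp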


  • OSX Server 3, Mac clients binding to OD and Profile Manager failing

    - by dbf
    I've made a setup consisting of a Mac Mini with OS X Server 3 (Mavericks 10.9.2) using Open Directory and Profile Manager (Mail etc. all set up and working). Internally, on the local network, everything works great: clients can bind to the OD and users are able to log in. I can install trust and settings profiles (either custom or group profiles) and all services in those profiles are configured correctly. I can log in and out, and do it a hundred times on different Macs with different users; it works. My goal is to make this service public. The domain has a FQDN which I own; for simplicity, let's say server.domain.com. Now the only way for me to bind the clients to the OD is using LDAP mapping RFC2307 (without SSL) with a DN suffix of dc=server,dc=domain,dc=com in Directory Utility. The "from server" or "Open Directory" options will throw several errors like:

        Connection failed to node '/LDAPv3/server.domain.com (2100)

    First of all, I don't really understand why clients can't bind to the OD like they do locally, with or without SSL (all ports are open, literally all ports, not just 389, 636 and 1640; I wasn't sure if I was missing any). When the clients bind using RFC2307 (which only works without SSL), they are able to authenticate, log in, and even load the trust profile, but every settings profile will fail with a debug message of

        Unable to find GUID in user record OD

    or fail to install, saying "missing user identification". Is there any way to get this to work without RFC2307? Because quite some stuff is missing when using RFC2307 instead of pulling the mapping from the server or using Open Directory. Is this setup even possible? Or should I use VPN to authenticate with the OD? The network setup is a modem/router (DHCP off) with the WAN NATted to an AirPort Extreme (using DHCP+NAT). The AE does notify with a double-NAT message, but I haven't had any problems with it on any other service. So WAN - 192.168.2.220 (static), AE - 10.0.1.* (DHCP).

    Output of dig from the outside (an iPhone), using dig server.domain.com:

        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;server.domain.com.        IN    A
        ;; ANSWER SECTION:
        server.domain.com.  77     IN    A    91.50.*.* (valid WAN IP)
        ;; SERVER 172.*.*.1#53(172.*.*.1)

    dig locally, from a client and from the server (same output):

        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 0
        ;; QUESTION SECTION:
        ;server.domain.com.        IN    A
        ;; ANSWER SECTION:
        server.domain.com.  10800  IN    A    10.0.1.11
        ;; AUTHORITY SECTION:
        server.domain.com.  10800  IN    NS   domain.com. (used for email send in relay)
        server.domain.com.  10800  IN    NS   server.domain.com.
        ;; SERVER 10.0.1.11#53(10.0.1.11)

    Are there any things I should check? I only have OS X.

    -- Double-NAT issue: I plugged the server directly into the modem/router with a static IP and the issue remains, so I guess that rules out the double-NAT thing.

    -- changeip -checkhostname comes back with:

        There is nothing to change, e.g. success.
        Primary address = 10.0.1.11
        Current HostName = server.domain.com
        DNS HostName = server.domain.com

    For now, I've made a workaround by using an admin account that forces a permanent VPN connection on boot. That means a connection is already made or under way before the login happens. I will continue this post when I have more time, locating all the necessary log files of each application involved. I have some suspicions, but have to debug a bit more when I have more time on my hands. Unless, of course, I get sidetracked with having a life. Which is arguably not very likely. krypted.com


  • Unit test SHA256 wrapper queries

    - by Sam Leach
    I am just beginning to write unit tests, so please bear with me. I have the following SHA256 wrapper:

        public static string SHA256(string plainText)
        {
            StringBuilder sb = new StringBuilder();
            SHA256CryptoServiceProvider provider = new SHA256CryptoServiceProvider();
            var hashedBytes = provider.ComputeHash(Encoding.UTF8.GetBytes(plainText));
            for (int i = 0; i < hashedBytes.Length; i++)
            {
                sb.Append(hashedBytes[i].ToString("x2").ToLower());
            }
            return sb.ToString();
        }

    Do I want to be testing it? If so, what do you recommend? My thought process is as follows: What logic is there here? The answer is my for loop and ToString("x2"), so from my understanding I want to be testing this part. I can assume Encoding.UTF8.GetBytes(plainText) works. Correct assumption? I can assume SHA256CryptoServiceProvider.ComputeHash() works. Correct assumption? I want to be testing only my logic, which in this case is limited to the printing of the hex-encoded hash. Correct? Thanks.


  • Is aspect oriented programming a misnomer?

    - by glenviewjeff
    From everything I have learned about "Aspect Oriented Programming" or "Aspect Oriented Software Development," labeling it as a programming paradigm or methodology appears to be inaccurate. From what I can tell, it is not a fundamental technique for programming. To nail down what is meant by "paradigm" and "methodology," please refer to the following definitions from the American Heritage Dictionary. Compare how well or poorly "Object-Oriented Programming" applies to each versus how well AOP fits.

        Paradigm: A set of assumptions, concepts, values, and practices that constitutes a way of viewing reality for the community that shares them, especially in an intellectual discipline.

        Methodology: A body of practices, procedures, and rules used by those who work in a discipline or engage in an inquiry; a set of working methods.

    "Evidence-based medicine" satisfies the definition of a paradigm, but "hysterectomy-based medicine" would be a misnomer because the problem space is too narrow. I am getting the impression that AOP may be misnamed, because with its "oriented-programming" suffix AOP is alleging to be both a paradigm and a methodology in the same way "Object-Oriented Programming" is. Both of these terms (paradigm and methodology) indicate a fundamental technique, whereas what I understand of aspects is a technology for solving a narrow problem scope, maybe comparable in magnitude to the static-variable feature of Java. If it is true that aspects solve a narrow set of problems, and AOP isn't a misnomer, then why shouldn't all programming techniques be given the "oriented-programming" suffix, such as "inheritance-oriented programming," "dependency-oriented programming," or "scope-oriented programming"?


  • How to implement fast search on Azure Blob?

    - by Vicky
    I am done with writing the code to upload files (text files) to Azure blob storage. Now I want to provide search based on the text files' content. For example, if I search for "Hello", then the names of the files that contain the word "Hello" should appear in the search results. Here is my search code:

        class BlobSearch
        {
            static void Main(string[] args)
            {
                string searchText = "Hello";
                CloudStorageAccount account = CloudStorageAccount.Parse(azureConString);
                CloudBlobClient blobClient = account.CreateCloudBlobClient();
                CloudBlobContainer blobContainer = blobClient.GetContainerReference("MyBlobContainer");
                blobContainer.FetchAttributes();
                var blobItemList = blobContainer.ListBlobs();
                foreach (var item in blobItemList)
                {
                    string line = string.Empty;
                    CloudBlockBlob blockBlob = blobContainer.GetBlockBlobReference(item.Uri.ToString());
                    if (blockBlob.Name.Contains(".txt"))
                    {
                        int lineno = 1;
                        using (var stream = blockBlob.OpenRead())
                        {
                            using (StreamReader reader = new StreamReader(stream))
                            {
                                while ((line = reader.ReadLine()) != null)
                                {
                                    if (line.IndexOf(searchText) != -1)
                                    {
                                        Console.WriteLine("Line : " + lineno + " => " + blockBlob.Name);
                                    }
                                    lineno++;
                                }
                            }
                        }
                    }
                }
                Console.WriteLine("SEARCH COMPLETE");
                Console.ReadLine();
            }
        }

    The code above works, but it is too slow. Is there any way to do it faster, or can the code above be improved?


  • wireless network with cable modem and access point

    - by hayri
    I have a Scientific Atlanta EPC2203 cable modem and a TP-Link TL-WA500G access point. When I connect my computer directly to the modem with a CAT5e cable, I have an internet connection on my laptop (when I type ipconfig I see my external IP there, provided by the ISP). So I decided to set up a wireless network in the flat, allowing other devices to connect as well. I bought this wireless AP (TL-WA500G), configured the wireless security settings, and connected it to my modem. With that configuration (by default the AP has a static IP of 192.168.1.254), only my computer can connect to the internet over WiFi, but no other device can. When I set the IP of the AP to Dynamic IP (DHCP), it is the same. How should I change my configuration to enable all WiFi devices to connect to the internet?


  • Migrating BizTalk 2006 R2 to BizTalk 2010 XLANGs Issue

    - by SURESH GIRIRAJAN
    When we migrated some BizTalk apps from BizTalk 2006 R2 to BizTalk 2010, we ran into an issue when a .NET component was called inside an orchestration. In the .NET component we try to retrieve a promoted property, and we checked in the BizTalk group hub to validate that it was promoted; no issues there. Only when we try to access the data in the .NET component do we have an issue. We had just moved all the assemblies we had in BizTalk 2006 R2 to BizTalk 2010 and didn't recompile anything in the BizTalk 2010 environment. Looking further, a couple of new namespaces added to Microsoft.XLANGs in BizTalk 2010 compared to BizTalk 2006 R2 caused the issue. So all we did to fix it was recompile the project in the 2010 environment, and it worked fine. So it looks like a backward-compatibility issue.

        public static void Load(XLANGMessage msg)
        {
            try
            {
                // get the process id from context.
                object ctxVal = msg.GetPropertyValue(typeof(ProcessID));
                …
            }
        }

    BizTalk 2010 error message in the event viewer:

        The service instance will remain suspended until administratively resumed or terminated.
        If resumed the instance will continue from its last persisted state and may re-throw the
        same unexpected exception.
        InstanceId: 441d73d3-2e84-49d2-b6bd-7218065b5e1d
        Shape name: Bulk Load
        ShapeId: bb959e56-9221-48be-a80f-24051196617d
        Exception thrown from: segment 1, progress 65
        Inner exception: A property cannot be associated with the type 'Tellago.Common.Schemas.ProcessId'.
        Exception type: InvalidPropertyTypeException
        Source: Microsoft.XLANGs.Engine
        Target Site: Microsoft.XLANGs.RuntimeTypes.MessagePropertyDefinition _getMessagePropertyDefinition(System.Type)
        The following is a stack trace that identifies the location where the exception occurred:
          at Microsoft.XLANGs.Core.XMessage._getMessagePropertyDefinition(Type propType)
          at Microsoft.XLANGs.Core.XMessage.GetContentProperty(Type propType)
          at Microsoft.XLANGs.Core.XMessage.GetPropertyValue(Type propType)
          at Microsoft.BizTalk.XLANGs.BTXEngine.BTXMessage.GetPropertyValue(Type propType)
          at Microsoft.XLANGs.Core.MessageWrapperForUserCode.GetPropertyValue(Type propType)
          at Tellago.Common.Components.Load(XLANGMessage msg)
          at Tellago.SuspensionProcess.segment1(StopConditions stopOn)
          at Microsoft.XLANGs.Core.SegmentScheduler.RunASegment(Segment s, StopConditions stopCond, Exception& exp)


  • Windows Server 2008 Active Directory DNS setup

    - by Mister IT Guru
    I have to set up a small Windows network inside my bigger Linux/Mac infrastructure. In order to get the Windows clients logging onto the domain, I have had to make the DC their primary DNS server, which seems to have worked. I would much prefer to have one DNS server running on my network, or at least one authoritative server running on the network. I have a USG 200 router/firewall and I can configure some static records for DNS, but I am not sure what I need to put in to get DNS and AD working together. Any hints and tips appreciated.
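    A common arrangement (offered as a hedged sketch, not a confirmed fix) is to keep the AD-integrated zone on the DC and have the DC forward every other query to the existing DNS server, so the Windows clients only ever point at the DC. On the DC, something like:

        rem Forward non-AD queries to the existing network DNS (address is a placeholder).
        dnscmd /ResetForwarders 192.168.1.1
        rem Confirm the configuration.
        dnscmd /Info
        dnscmd /EnumZones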


  • Dynamic/Adaptive RLE

    - by Lucius
    So, I'm developing a 2D, tile-based game and a map maker thingy, all in Java. The problem is that recently I've been having some memory issues when about 4 maps are loaded. Each one of these maps is composed of 128x128 tiles and has 4 layers (for details and stuff). I already spent a good amount of time searching for solutions, and the best thing I found was run-length encoding (RLE). It seems easy enough to use with static data, but is there a way to use it with data that is constantly changing, without a big drop in performance? In my maps, supposing I'm compressing the columns, I would have 128 rows, each with some amount of data (hopefully less than it would be without RLE). Whenever I change a tile, that whole row would have to be checked, and I'm afraid that would slow down production too much (and I'm on a somewhat tight schedule). Well, worst case scenario, I work on each map individually and save them using RLE, but it would be really nice if I could avoid that. EDIT: What I'm currently using to store the data for the tiles is a 2D array of HashMaps that use the layer as key and store the id of the tile in that position - like this: private HashMap< Integer, Integer [][]
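    For reference, a minimal sketch (an editorial addition, not from the original post) of run-length encoding a single 128-tile row as (count, id) pairs, with a point update that decodes, modifies and re-encodes just that row; at 128 tiles per row this is cheap enough to run on every edit:

        import java.util.ArrayList;
        import java.util.List;

        class RleRow {
            // Encode a row of tile ids into (count, id) runs.
            static List<int[]> encode(int[] row) {
                List<int[]> runs = new ArrayList<>();
                int count = 1;
                for (int i = 1; i <= row.length; i++) {
                    if (i < row.length && row[i] == row[i - 1]) {
                        count++;
                    } else {
                        runs.add(new int[] { count, row[i - 1] });
                        count = 1;
                    }
                }
                return runs;
            }

            // Expand the runs back into a flat row.
            static int[] decode(List<int[]> runs, int length) {
                int[] row = new int[length];
                int pos = 0;
                for (int[] run : runs) {
                    for (int i = 0; i < run[0]; i++) {
                        row[pos++] = run[1];
                    }
                }
                return row;
            }

            // Point update: decode one row, change one tile, re-encode.
            static List<int[]> setTile(List<int[]> runs, int length, int x, int id) {
                int[] row = decode(runs, length);
                row[x] = id;
                return encode(row);
            }
        }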

